How do you create a tree view in HTML? To create a tree view, you use an element with the attribute data-role="treeview". Nodes are defined with child elements nested inside it.

How do you make a dynamic tree view in HTML? How to use it: create a container element to hold the tree; define your JSON data for the tree; initialize the tree view plugin and point it at the JSON data; define any JSON data to fetch on demand.

Does Bootstrap have a treeview? Bootstrap treeview is used to show hierarchical information, which starts from a root item and proceeds to its children and their respective children. Each item besides the root has a parent and can have children. The parent is the node that is higher in the hierarchy, and the child is the one that is lower.

What is the div element in HTML? What is HTML hierarchy? The HTML of a web page is like a family tree, where the HTML tags represent the various family members. HTML has tags within tags, also called nested tags. The relationship among these tags, and how they are nested within each other, is like a family tree.

Is HTML hierarchical? An HTML document is like a big family tree, with parents, siblings, children, ancestors, and descendants. This hierarchy comes from the ability to nest HTML elements within one another.

What is an HTML tree? Each HTML document can be referred to as a document tree, and we describe the elements in the tree the way we would describe a family tree: there are ancestors, descendants, parents, children, and siblings. It is important to understand the document tree because CSS selectors are based on it.

What is Treeview in Tkinter? A Treeview widget allows you to display data in both tabular and hierarchical structures. To create a Treeview widget, you use the ttk.Treeview class: tree = ttk.Treeview(container, **options). A Treeview widget holds a list of items, and each item has one or more columns.

How does div work in HTML? Why is div used in HTML?
The div tag is known as the division tag. It is used in HTML to divide the content of a web page into sections (text, images, header, footer, navigation bar, etc.). It groups various HTML tags so that sections can be created and styles can be applied to them.

What's the idea of the easy DHTML TreeView? What makes a good tree view in CSS? The tree design resembles a tournament bracket, and it suits any tree view where breadth is considerably larger than depth. This also makes the tree view design look better, given the usual display orientation.

What does it mean to have a tree view? A tree view represents a hierarchical view of information, where each item can have a number of subitems. Click on the arrow(s) to open or close the tree branches.

How do you increase the font size in a TreeView? Click on the '+' to open more of the sub-branches. You also have the option to increase or decrease the font size by simply sliding the range bar.

2. TreeView Pure CSS HTML Example design
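The nesting described above can be illustrated with a minimal tree view built from plain HTML details/summary elements. This is a generic sketch, not tied to any particular plugin; libraries like the Bootstrap or Metro UI tree views generate similarly nested markup:

```html
<!-- A minimal collapsible tree: each branch is a <details> whose label is its
     <summary>, and whose children sit in a nested <ul>. -->
<ul class="tree">
  <li>
    <details open>
      <summary>Root</summary>
      <ul>
        <li>
          <details>
            <summary>Branch A</summary>
            <ul>
              <li>Leaf A1</li>
              <li>Leaf A2</li>
            </ul>
          </details>
        </li>
        <li>Leaf B</li>
      </ul>
    </details>
  </li>
</ul>
```

Clicking a summary toggles its branch with no JavaScript at all; CSS can then style the markers and indentation.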
When writing to an Excel sheet, how can I add a new row each time I run the script?

I've written a script that pulls and summarizes data from two databases regularly. I'm using write.xlsx to write three data frames to three different tabs: Tab1 is data from one database, Tab2 is data from another, and Tab3 is data summarized from both. It is fine if Tabs 1 and 2 are overwritten with each run, but I'd like Tab3 to add a new row (with Sys.Date() in column 1) each run, thus giving me a history of the data summaries. Is this possible, and how? append=TRUE is not getting me what I'm looking for. My next option (less desirable) is to just write the summarized data to a separate file with the date in the filename.

```r
# Write to spreadsheet w/ 3 tabs
write.xlsx(df1, file = "file_location\\filename.xlsx", sheetName = "dataset1",
           row.names = FALSE, append = FALSE)  # write to tab1
write.xlsx(df2, file = "file_location\\filename.xlsx", sheetName = "dataset2",
           row.names = FALSE, append = TRUE)   # write to tab2
write.xlsx(summary_df, file = "file_location\\filename.xlsx", sheetName = "summarized_data",
           row.names = FALSE, append = TRUE)   # should write to next blank row in tab3
```

The script is overwriting the summary data with each run; I would like it to append to the next blank row.

The openxlsx package allows you to specify not only the row at which to append the new data, but also to extract the last row of the current workbook sheet. The trick is to write the date and the new summary data as DataTable objects, which are specific to the openxlsx package. The function writeDataTable does this, and its only required arguments are the workbook object, the name of the sheet in the workbook, and an R data.frame. The workbook object can be obtained with the loadWorkbook function, which simply takes the path to the Excel file.
You can also read in an entire Excel workbook with multiple sheets, then extract and modify the individual worksheets before saving them back to the original workbook. To create the initial workbook so that later updates stay compatible with it, replace the write.xlsx statements with writeDataTable statements (note that each sheet must be added with addWorksheet before it can be written to):

```r
library(openxlsx)

wb <- createWorkbook()
for (s in c("dataset1", "dataset2", "summarized_dat")) addWorksheet(wb, s)
writeDataTable(wb, sheet = "dataset1", x = dataset1)
writeDataTable(wb, sheet = "dataset2", x = dataset2)
writeDataTable(wb, sheet = "summarized_dat", x = summary_df)
saveWorkbook(wb, "file_location\\filename.xlsx")
```

After creating the initial workbook you can simply load it and modify the sheets individually. Since you mentioned you can overwrite dataset1 and dataset2, I will focus on summarized_dat; you can just reuse the code above to overwrite the other two:

```r
library(openxlsx)
wb <- loadWorkbook("file_location\\filename.xlsx")
```

Now you can use the function below to append the new summarized data along with the date. For this I converted the date to a data.frame, but you could also use the writeData function. I like writeDataTable because getTables then makes it easy to extract the last row. The output from calling getTables looks like this, using one of my own workbooks (with three tables stacked vertically) as an example:

[1] "A1:P61" "A62:A63" "A64:T124"

The trick is then to extract the last row number, which in this case is 124. Here is a function that does all of this automatically; it takes a workbook object, the name of the sheet you wish to modify, and the updated summary table as a data.frame:

```r
Update_wb_fun <- function(wb, sheetname, newdata) {
  tmp_wb <- wb  # copy, in case you wish to keep the original as a check
  # Excel cell ranges of all DataTables in the worksheet. On the first run there
  # will be only one; on subsequent runs there will be more, and we want the last.
  table_cells <- names(getTables(tmp_wb, sheetname))
  lastrow <- as.numeric(gsub("[A-Za-z]", "",
                             unlist(strsplit(table_cells[length(table_cells)], ":"))[2]))
  start_time_row <- lastrow + 1
  # Append the time stamp as a DataTable, then the new data beneath it.
  writeDataTable(tmp_wb, sheet = sheetname, x = data.frame(Sys.Date()),
                 startRow = start_time_row)
  writeDataTable(tmp_wb, sheet = sheetname, x = newdata,
                 startRow = start_time_row + 2)
  return(tmp_wb)
}
```

The code that extracts lastrow looks a little complicated, but it is all base R. length takes the last element of the ranges above (e.g. "A64:T124"); strsplit splits the string on the colon, which we unlist to take the second element (e.g. "T124"); gsub removes the letters, keeping only the row number (e.g. "124"); and as.numeric converts it from character to numeric. To call this function and update the "summarized_dat" worksheet:

```r
# Preserving the original workbook object before modification; you could save to wb directly.
test_wb <- Update_wb_fun(wb, "summarized_dat", summary_df)
# overwrite is needed if the file already exists.
saveWorkbook(test_wb, "file_location\\filename.xlsx", overwrite = TRUE)
```

That is how you can append to the "summarized_dat" sheet. Before the final save, you could also update the first two sheets by re-running the earlier writeDataTable statements. Let me know if this is what you are looking for, and I can modify the code easily. Good luck!

I would check out the openxlsx package: it allows you to write data starting at a given row in an Excel file. The only catch is that you will need to read the current Excel file in order to determine which line is the last one containing data.
You can do that with the readxl package, using nrow() on the data frame you create from the Excel file to determine the number of rows.
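That readxl/nrow() alternative can be sketched in a few lines. This is only an illustrative sketch, using the file and sheet names from the question and assuming the sheet has a single header row:

```r
library(readxl)
library(openxlsx)

existing <- read_excel("file_location\\filename.xlsx", sheet = "summarized_data")
next_row <- nrow(existing) + 2  # +1 for the header row, +1 to land on the next blank row

wb <- loadWorkbook("file_location\\filename.xlsx")
writeData(wb, sheet = "summarized_data", x = summary_df,
          startRow = next_row, colNames = FALSE)  # append without repeating the header
saveWorkbook(wb, "file_location\\filename.xlsx", overwrite = TRUE)
```

The downside of this approach is the extra read of the whole sheet on every run, which the getTables trick above avoids.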
I'm using GameGuardian in a virtual space, with root enabled. I tried to change the diamond amount in a game: I searched for the amount I had (21), selected "double", and it was found, then chose to replace it with 2k. And when I spent some of the 21 diamonds, it automatically changed to 2k. Hooray!

Now I've tried the same method on another game, "Granny's house". First of all, when I tried to select its process, I found 3 copies of it instead of 1, but I chose the biggest one. I want to change the money amount, or the souls. I searched 1052 as "double" but found nothing. I tried 10 too, but also nothing. I tried spending some to change the amount, then searched 990 and got two results; I set them to 6k and spent some, but the money amount didn't change. Then I tried "auto search" and got many results in different data types, so when I try to enter the alternative value, it asks me to select one data type. I tried them all, but nothing changed.

So, does the method I used on the first game not apply to all games? And if there are other methods, are there any tutorials for them? I'm no expert; this is what I understood from reading and watching some tutorials and then trying it out. Thank you for any help.

How do I fix this error?

error script: luaj.o: load /storage/emulated/0/DCIM/SharedFolder/[0.3] for Standoff 0.18.0.lua: luaj.o: unsupported int size: 32 Caused by: luaj.o: unsupported int size: 32 ... 2 more

I recently got into ARMv7 lib hacking and it's working pretty well, but I'm pretty new, so there are still many things I'm unsure about. I have a few questions and hope someone can help me by answering them. Basically, my process for hacking libs is this:

=> I dump the game => find interesting entries => copy the offset => copy a big hex string => search in the Xa:Code app (Il2cpp) for the address (e.g. h0A 70 10 ..... E1) => edit the address and the one below it with a hex value I got off the internet, e.g.
0A701090r => h0000A0E3
E5000051r => h1EFF2FE1 (trying to set to False or 0)

List of hex values I use to edit:

00 00 A0 E3 1E FF 2F E1 = False, or number 0
01 00 A0 E3 1E FF 2F E1 = True, or number 1
02 00 A0 E3 1E FF 2F E1 = number 2
07 00 A0 E3 1E FF 2F E1 = number 7
0A 00 A0 E3 1E FF 2F E1 = number 10
0F 00 A0 E3 1E FF 2F E1 = number 15
10 00 A0 E3 1E FF 2F E1 = number 16
11 00 A0 E3 1E FF 2F E1 = number 17
12 07 80 E3 1E FF 2F E1 = value of 12 million; can be used for health/ammo/armour/damage
DC 0F 00 E3 1E FF 2F E1 = value 4060
DC 0F 0F E3 1E FF 2F E1 = value 120000
01 00 A0 E3 1E FF 2F E1 = value 1, also True, used for bool
00 00 A0 E3 1E FF 2F E1 = value 0, also False, used for bool
01 0A A0 E3 1E FF 2F E1 = 1000
01 08 A0 E3 1E FF 2F E1 = 10000
01 02 A0 E3 1E FF 2F E1 = 10000000
C2 0A 64 60 00 00 00 02 = speed hack
01 04 A0 E3 1E FF 2F E1 = 1000000
0E 00 A0 E3 1E FF 2F E1 = fire rate
FF FF = value of 65535 = highest value expressible in a 4-character hex code

true_edit = "20008052r"
false_edit = "00008052r"
end_bool = "C0035FD6r"

So my questions are: am I doing anything wrong here? Do I need to edit 2 addresses like I do? Is there a more efficient way to edit the hex values? And how can I search in ARM64 hex strings?
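For context on lists like the one above: the `A0 E3` patterns are little-endian encodings of the ARM A32 instruction MOV R0, #imm, and 1E FF 2F E1 is BX LR (return). The immediate is an 8-bit value rotated right by twice a 4-bit rotate field, which a short Python helper can decode. Note that several decimal values in such community-shared lists are only approximations of what the encoding actually produces:

```python
def decode_arm_mov_imm(hex_le: str) -> int:
    """Decode the immediate of an ARM A32 'MOV Rd, #imm' given little-endian hex bytes."""
    word = int.from_bytes(bytes.fromhex(hex_le.replace(" ", "")), "little")
    imm8 = word & 0xFF                # 8-bit base value
    rot = ((word >> 8) & 0xF) * 2     # rotate-right amount: 0, 2, ..., 30
    return ((imm8 >> rot) | (imm8 << (32 - rot))) & 0xFFFFFFFF

print(decode_arm_mov_imm("00 00 A0 E3"))  # MOV R0, #0  -> 0
print(decode_arm_mov_imm("0A 00 A0 E3"))  # MOV R0, #10 -> 10
print(decode_arm_mov_imm("01 0A A0 E3"))  # -> 4096, not exactly "1000"
```

So "01 0A A0 E3" really loads 4096 (0x1000), not decimal 1000; the nearby list entries appear to mix up hex and decimal.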
CSE 461 Project 2: Routing and Congestion Control

Due March 2, 1998. Design reviews February 24-25.

For project 2 we will delve a little deeper into the network and look at routing and congestion control. The network simulator has been expanded so that the network connecting senders and receivers can be modeled as a switched network consisting of many links and routers. Each of these links will simulate a simple delivery model, and you will have to write the code for the routers to efficiently discover the network topology, route packets to their destinations, and handle congestion, load variation, and link failures. You will also need to be able to handle multiple simultaneous conversations on the network, and for the first time these conversations will come and go.

New: Failure test setup. Here is a version of the square config that should work for testing.

New: Congestion test setup. Here is a config file that will exhibit congestion problems (in the second phase, between 120000 and 170000). The theoretical max should be 0.5 Mb/s for both phases (split between both conversations).

New: Patches available. Some enhancements to the basic distribution are available. Only the changed files are provided, so you will have to copy them into your workspace if you want to use them. Project 2 Patches. The online distributions are here. Choose either the Windows or Unix version. You can use WinZip on the NT machines to unzip the distribution.

Part 1a: Routing

The first part of this project involves implementing a packet routing algorithm similar to that used by IP. Routing involves two tasks: topology discovery and routing packets. You may implement any of the topology discovery algorithms discussed in lecture, or one of your own creation. Your routing algorithm should, after the topology is discovered, route packets along the best route, for any reasonable definition of best.

For this part you will need to modify the implementations of two classes:

- Router implements the internal nodes in the network. The current implementation uses a repeater algorithm with no routing logic.
- RoutingNetworkEdge implements the edge nodes. There is exactly one client (either a sender or a receiver) associated with each RoutingNetworkEdge. Edge nodes differ from internal nodes in that they are aware of the host connected to them (which gives them some additional information to contribute to the topology discovery algorithm), and in that they support two interfaces: that of a generic node to the rest of the network, and that of a complete network to the client. The existing implementation simply recognizes arriving packets destined for the host, and forwards all outgoing packets to the rest of the network.

To evaluate your routing algorithm, we will provide a few sample network topologies with simple conversations and the property that congestion is not an issue. You will need to achieve a total conversation bandwidth (the sum of the observed bandwidths of all conversations) of at least half the "theoretical maximum," which we will compute as the optimal bandwidth assuming each conversation takes the shortest route. Your algorithm will be given some initial time to stabilize the routing tables before the traffic starts.

Part 1b: Failure Management

Once you have routing working, we can introduce link failures. When a link fails, the two routers that used to connect to it will be immediately notified. Make sure that your algorithm can successfully deal with failures and still get packets through along the new optimal routes (assuming that the network is not partitioned by the failure).
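Once topology discovery has given a router the full graph, one reasonable way to route "along the best route" is Dijkstra's shortest-path algorithm, keeping only the first hop toward each destination. The sketch below is illustrative only; the class and method names are not part of the provided skeleton:

```java
import java.util.*;

// Illustrative sketch: compute a next-hop table from a discovered topology.
public class ShortestPaths {
    record Entry(String node, int dist) {}

    /** graph: node -> (neighbor -> link cost); returns destination -> first hop out of src. */
    static Map<String, String> nextHops(Map<String, Map<String, Integer>> graph, String src) {
        Map<String, Integer> dist = new HashMap<>();
        Map<String, String> firstHop = new HashMap<>();
        PriorityQueue<Entry> pq = new PriorityQueue<>(Comparator.comparingInt(Entry::dist));
        dist.put(src, 0);
        pq.add(new Entry(src, 0));
        while (!pq.isEmpty()) {
            Entry cur = pq.poll();
            if (cur.dist() > dist.getOrDefault(cur.node(), Integer.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<String, Integer> e : graph.getOrDefault(cur.node(), Map.of()).entrySet()) {
                int nd = cur.dist() + e.getValue();
                if (nd < dist.getOrDefault(e.getKey(), Integer.MAX_VALUE)) {
                    dist.put(e.getKey(), nd);
                    // first hop is the neighbor itself when leaving src, otherwise inherited
                    firstHop.put(e.getKey(),
                                 cur.node().equals(src) ? e.getKey() : firstHop.get(cur.node()));
                    pq.add(new Entry(e.getKey(), nd));
                }
            }
        }
        return firstHop;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> g = Map.of(
            "A", Map.of("B", 1, "C", 5),
            "B", Map.of("A", 1, "C", 1),
            "C", Map.of("A", 5, "B", 1));
        System.out.println(nextHops(g, "A")); // C is reached via B (cost 2, not the direct cost-5 link)
    }
}
```

Recomputing this table whenever a link-failure notification changes the graph is one straightforward way to handle Part 1b as well.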
Failure management evaluation

To evaluate your failure management, we will extend the routing tests so that various link failures are "scheduled" for different points in the test run, so that the topology progresses through a series of configurations. For each configuration we will compute the same "theoretical maximum" bandwidth, and you need to achieve at least half of this within some reasonable time after the failure occurs.

Part 2: Congestion Control

Congestion control is much more interesting. To deal with it, you will need to modify the sliding window algorithm to enable backoff. You can choose TCP-style end-to-end congestion control, or if you want you may experiment with a "pushback" algorithm of some flavor. You will need to be able to deal with conversations that come and go, and recover the available bandwidth. A typical situation might be:

- Long conversation 1 starts and uses all available bandwidth.
- Short conversation 2 starts; 1 and 2 share bandwidth more or less equally.
- Short conversation 2 ends; conversation 1 reverts to using all available bandwidth.

You do not need to implement truly fair congestion control, but your algorithm should not be unreasonably unfair. You can implement congestion control using either your solution to project 1 (which will have to be adapted to the new framework) or the sample solution (which will be set up so that the window size can be adjusted dynamically).

Congestion control evaluation

As before, we will provide network topologies, this time with the property that some links can be overloaded. The "theoretical maximum" will be computed assuming all conversations take the shortest path, and that all congested links are shared fairly. As before, you need to achieve at least half the theoretical value, again stabilizing within some reasonable amount of time from when the load situation changes. All through this document we have put the words "theoretical maximum" in quotes.
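The TCP-style backoff mentioned above boils down to additive increase, multiplicative decrease (AIMD) on the window size. The following controller is an illustrative sketch in the spirit of TCP congestion avoidance; the class and method names are hypothetical, not part of the project framework:

```java
// Illustrative AIMD (additive-increase, multiplicative-decrease) controller
// for a sliding-window sender.
public class AimdWindow {
    private double cwnd = 1.0;       // congestion window, in packets
    private final double maxWindow;  // cap, e.g. the receiver's advertised window

    public AimdWindow(double maxWindow) { this.maxWindow = maxWindow; }

    /** Called per acknowledged packet: grow by roughly one packet per window (additive). */
    public void onAck()  { cwnd = Math.min(maxWindow, cwnd + 1.0 / cwnd); }

    /** Called on timeout or loss: halve the window (multiplicative decrease). */
    public void onLoss() { cwnd = Math.max(1.0, cwnd / 2.0); }

    public int windowSize() { return (int) cwnd; }
}
```

The multiplicative decrease is what lets a long-running conversation give up bandwidth quickly when a new one appears, and the additive increase is what lets it reclaim bandwidth after the short conversation ends.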
For extra credit, we will supply a few topologies where it is possible to substantially exceed the "theoretical maximum" bandwidth, either by routing away from congested links or by splitting the traffic of a single conversation across routes. Extra credit of some form will be available for those who can substantially exceed the "theoretical maximum" on these topologies.

The code for this project is substantially larger than that for the last one. The simulator has been broken into several Java packages:

- debug for the debugging message code
- timer for the timer and associated stuff
- link for the link-level parts
- network for the network-level parts (which is where routing fits in)
- reliability for the reliable-transport-layer parts

The other directory in the distribution is doc, which contains the javadoc-generated documentation for all the provided code. That documentation is also available online.

The code is available in two distributions: a zip file for Windows/Visual J++ and a tar file for Unix. The only differences are the CR/LF convention on the source files and the archive format. The synchronization methodology has also been changed in an attempt to make managing the parallelism easier. All operations are now synchronized exclusively through the global timer, making it substantially harder to deadlock the system. More explanation will be provided in section.

- The Visual J++ version comes with a single workspace containing a project for each package/directory. Also set up in the workspace are the classes to run and the CLASSPATH modifications to find the other packages in the distribution.
- The Unix version comes with a GNU makefile and a README describing how to set up your environment and run the program.

The driver for this project reads the network topology and load configuration from a config file. Documentation and the provided configs are in the directory config.
The simulator arguments are:

java network.project2 [seed=num] (default: current time) [config=filename] (default: "project2.cfg")

to set the random number seed (strongly recommended) and the config file to use.

Just as in project 1, this assignment should be done and handed in in groups of 2-3. Design reviews will be held on or around February 24-25. Groups will need to sign up for a timeslot, and all group members will need to attend. A few review slots may be available on the 19th, talk to

Online turnin will be used as in project 1. The due date is March 2. The turnin program and server will be available shortly before the due date. You may use your slip days on this assignment if you wish.

Turnin is now available. Grab the file turnin.class, and turn in this project just like project 1. The turnin program prompts you for the student ID of the "first" student in the group. I don't really care whose SID you use, but make sure that if you submit more than once you keep using the same SID to identify the group. As long as you use the same SID you can submit as many times as you like. All turnins are saved, but unless we have some reason to do otherwise we will only look at the last one.

Grab the file turnin.class and put it in your project directory. Make sure all your .java files are in this directory, and that no extraneous .java files are there. Make a file readme.txt in your project directory that (at the very least) gives your names, describes your algorithm, and tells us how we should run it to see it work. A sample file, which shows what I'm after, is here. Remember: we will be reading this file, and it will benefit you to tell us anything that might be helpful when grading your project. Show us that it works. Also note: short and sweet should be the readme rule of thumb. Remember, you're also turning in all your code, and I can run the program if I want to see all the output.
The readme should be a roadmap to your project; it should show why you think it works and tell us what you learned.

In your project directory, run the command java turnin from the command line (or jview turnin on Visual J++ systems). You will need to be on a machine that is connected to the internet. Be patient: the server is slow and single-threaded. What you get back from turnin is a short receipt listing the files and sizes received, the turnin time, and the number of slip days used for the assignment. If you have any problems with turnin, be sure to mail Andy (firstname.lastname@example.org) and/or post to the email list.
What URI do I need on the browser side to connect to an Amazon Redis server from Heroku's Redis addon?

I'm trying to set up a chat application on Heroku with Redis and socket.io, but I can't figure out what URI I'm supposed to put on the client side. All the URIs I've tried give me 404, name_not_resolved, or timeout errors. I have one Heroku app which runs a Node.js buildpack, and all it does is run the socket.js file. And I have another PHP Heroku app which has the Laravel back end with Redis broadcasting and a Vue front end. The broadcasting is set up so that when someone publishes a post or makes a GET request to '/', an event is fired on 'new-post-channell' and 'user-entered-chat-channel' respectively. I can go into the bash of the socket.js app, run 'node socket.js', and see that it connects to Heroku's Redis addon (an Amazon server) and picks up the broadcasts. I can also go into the second app's redis-cli, enter monitor mode, and see that broadcasts are being picked up as intended. It all worked in a Vagrant Homestead virtual server, but doesn't on Heroku.

var socket = io('redis://h:oaisuhaosiufhasodiufh@ec2-99-81-167-43.eu-west-1.compute.amazonaws.com:6639');

(And maybe you also know how I can run the 'node socket.js' command on my first app automatically, so that I don't have to go into Heroku's bash and run it manually?)

If those are your real credentials, you should invalidate them immediately. They are forever compromised, and you need to generate new ones. Editing them out of your question is not enough.

No. Not real. What's the worst that could happen if they were real, and if it's just a learning app?

All in all, I finally got a VPS on Vultr.com and ran into the same problem. So the answer is: if you have HTTPS set up, then you need to put in the domain you are on:

io('https://' + window.location.hostname, {reconnect: true});

You then need to navigate to your nginx configuration files and edit them.
I've set up mine in the sites-available section: /etc/nginx/sites-available/yourDomainOrIp.conf. You will have "location" sections in the configuration file; make a new one (I put mine before the others):

location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass "http://localhost:3000/socket.io/";
}

This section means that any request to /socket.io/ is proxied to localhost port 3000. And on localhost port 3000 I have the Node.js app listening:

var server = require('http').Server();
var io = require('socket.io')(server);
var Redis = require('ioredis');
var redisNewMessage = new Redis();
var redisUserEntered = new Redis();
server.listen(3000);

So, I still don't know how to fully answer the question, but basically: in that io() you need to pass an address which eventually routes the "/socket.io/polling%something&something" requests to the host and port the Node.js app is sitting on. The Amazon link stays in the Node.js new Redis() call. Socket.io has to connect to the Node.js app, and then it all should work.
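As for the parenthetical question about running 'node socket.js' automatically on Heroku: the usual way is to declare it as a process type in a Procfile at the root of the app. A minimal sketch, assuming socket.js sits at the repository root:

```
worker: node socket.js
```

Then scale it with heroku ps:scale worker=1; Heroku keeps the worker dyno running and restarts it if it crashes, so there is no need to start it manually from bash.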
Using the right Redis data types for time-based comparisons

In an app I'm writing, users can perform various social actions. I'm saving the results of these actions in Redis hashes. The naming scheme of each hash concatenates user_ids and corresponding action_ids; e.g. hash:11:99 could be a hash storing results for user_id 11 and action_id 99. Under this scheme, retrieving the results of any action performed by any user is an O(1) operation (if both user_id and action_id are known).

But next, I also need to look up all the results of all actions performed by a user in the last 30 minutes (precise action_ids unknown). To achieve this, I'm storing action_ids along with timestamps in a sorted set per user; e.g. sorted_set:1 could contain action_ids and timestamps for user_id 1. From here, there's a multi-step process to get all actions performed by a user within the previous 30 minutes:

1) In the user's sorted set, use ZRANGEBYSCORE to look up the action_ids from the last 30 minutes. Time complexity: O(log(N)+M).
2) Using the retrieved action_ids, construct the hash names that have to be accessed (i.e. hash:user_id:action_id).
3) Iterate over each hash and retrieve the desired result. Time complexity: O(n).

My question is: how can I fulfill these requirements with better performance? I'm open to re-imagining which Redis data types to use.

Essentially, what you're doing there is reimplementing what an RDBMS has been able to do for the last 30 years (or more). One must ask: why don't you just pick the right tool for the job?

@SergioTulentsev: for my particular app, Redis performs much faster than PostgreSQL.

Even in your second multi-step scenario? With proper indexes and some app-specific tuning, Postgres can deliver amazing perf. And you have the full power of SQL :)

@SergioTulentsev: I haven't benchmarked the multi-step scenario yet (since I haven't built it). I need to confirm from experts (i.e. you folks) whether this is the fastest way to do it within the Redis universe.
Once I have an answer, I'll build it, test it, and benchmark it.

You could put it all in a Redis Lua script to save the back and forth between client and Redis.

I don't see myself as a Redis expert, but your approach looks fine to me. Using a sorted set to emulate an index is idiomatic, I think.

@SergioTulentsev: I'll accept that as an answer if you post it, along with any other advice (e.g. musings on a hybrid Redis-PG solution).

@HassanBaig: nah, I'm good :)

Some use cases require data redundancy. If you need to store partial data in these sorted sets, instead of just action identifiers, because it lets you retrieve the required info in less time than an ordinary lookup, Redis won't be the one to tell you not to. Just do it!

By partial data I mean that I guess you're storing JSON-serialized objects, or data in some other serialization format. Maybe the source object has 12 properties, but when you need the latest actions done by some user during the last 30 minutes, you only need 4 of those 12 properties. So go for it! Store a serialized object with just those 4 properties, plus the id so you can fetch the full object in the application layer if required.

Furthermore, redundancy may mean that you create 4 sorted sets storing that ranking of latest actions, each with different partial data based on the use case: one case requires 3 properties, another requires 2 different ones, and so on. Just remember that Redis is about indexing data in a very efficient way so that you can access it in a breeze. AFAIK, relational database indexes work this way too: you can build many indexes, over many columns, in all possible combinations, against the same data table. With Redis you can get the same behavior, except that you decide how to model the indexes!
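The Lua-script suggestion above collapses the whole multi-step lookup into one round trip. A sketch, using the key scheme from the question (note one caveat: constructing key names inside a script works fine on a single Redis instance, but violates Redis Cluster's requirement that all keys be declared in KEYS):

```lua
-- EVAL script: fetch every action hash for one user within a time window.
-- KEYS[1] = the user's sorted set, e.g. "sorted_set:11"
-- ARGV[1] = minimum timestamp (now minus 30 minutes), ARGV[2] = user_id
local ids = redis.call('ZRANGEBYSCORE', KEYS[1], ARGV[1], '+inf')
local results = {}
for i, action_id in ipairs(ids) do
  -- hashes are named hash:<user_id>:<action_id>
  results[i] = redis.call('HGETALL', 'hash:' .. ARGV[2] .. ':' .. action_id)
end
return results
```

Loaded once with SCRIPT LOAD and invoked with EVALSHA, this replaces the range query plus n HGETALL round trips with a single call.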
Thanks, and you are correct! DavDroid uses the path returned from the server root to decide where to put the relative path on PROPFIND calls; changing it to dir/ results in it working fine. Great app.

Synchronization Error: Couldn't delete local events

For an .ics feed which has synced fine with ICSDroid for some time, I'm now getting a synchronization error: "Couldn't delete local events". The calendar behaves as the error message indicates: new events are synced to the calendar, but events which are deleted from the .ics file are not deleted from the local Android calendar when syncing. I've already tried deleting and recreating this subscription in ICSDroid, as well as the calendar, and I still get this error message. Some other .ics feeds synced with ICSDroid work fine.

As I suspected some entries in the calendar might be the issue, I removed the recent entries from the .ics file, and voilà, it syncs again. Just to verify, I created an .ics file which contains only these recent events, but then it also syncs fine!? To narrow down the issue further I split the .ics file, and then both parts sync fine. As the calendar file has grown to a little over 4 MB (~4.2 MB), my suspicion is that there is an issue with .ics files bigger than 4 MB. Any help or workaround gladly received, thanks! (Samsung Galaxy S5, Android 5.1.1)

Sounds like this SQL statement becomes too large for IPC (1 MB) for more than 4 MB of events …

Good catch; that seems to be the issue based on the exception thrown! I'm surprised this hasn't surfaced before, because a 4 MB .ics file isn't that uncommon. I'm not too deep into programming and my SQL skills are quite rusty, but looking at the statement it seems "sqlUIDs" is the part getting quite large, correct? Could a workaround be to replace the "sqlUIDs" NOT IN construct with a LEFT OUTER JOIN or a NOT EXISTS? Just my 2 cents.

I have sent an APK over email. Please tell me whether it now works for you.
Unfortunately, trying to install the APK you sent ends in "App not installed". I've emailed you a test file I created which yields the same "Couldn't delete local events" behavior.

You need to uninstall the old one and install the new one.

@devvv4ever Wouldn't I lose my configuration if I uninstall the old one?!

Of course, but it is not possible to test it otherwise. My suggestion: create QR codes of your URLs and save them. Afterwards, scan one and open it directly with ICSdroid; there is an intent so that you can directly and easily add an ics file.

Now works with 1.5 and your test file.
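As an illustrative aside on the 1 MB binder limit diagnosed above: rather than issuing one giant "uid NOT IN (every UID in the feed)" delete, a common workaround is to diff the UID sets in memory and delete the stale UIDs in small "uid IN (...)" batches, so no single statement grows with the feed. All names below are hypothetical, not taken from the ICSdroid sources:

```java
import java.util.*;

// Sketch of batched stale-event cleanup to keep each SQL statement small.
public class StaleEventCleanup {
    /** UIDs present locally but no longer in the .ics feed. */
    static List<String> staleUids(Collection<String> localUids, Collection<String> feedUids) {
        List<String> stale = new ArrayList<>(localUids);
        stale.removeAll(new HashSet<>(feedUids));
        return stale;
    }

    /** Split the stale UIDs into fixed-size batches; issue one "uid IN (?,...)" delete per batch. */
    static List<List<String>> batches(List<String> uids, int batchSize) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < uids.size(); i += batchSize)
            out.add(new ArrayList<>(uids.subList(i, Math.min(i + batchSize, uids.size()))));
        return out;
    }
}
```

Note the direction matters: batching a NOT IN clause would be wrong (each batch would delete the others' rows), which is why the diff happens in memory first.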
In a monolith application, a single error can bring down the entire system. This risk is reduced in a microservices architecture because it uses smaller, independently deployable units that don't affect the whole application or system. Does that mean a microservices architecture is immune to failures? No, not at all. Simply converting a monolith into microservices doesn't automatically fix all issues. Microservices heavily rely on distributed systems, making resiliency critical to their design and performance. When architecting distributed cloud applications, it's crucial to anticipate failures and design your applications with resiliency in mind. Microservices are likely to fail at some point, so it's essential to be prepared for failures. Don't assume everything will always go smoothly. Plan for rainy days, snowstorms, and other adverse conditions. In short, design your microservices to handle failures. With that said, we'll discuss fault tolerance and failure recovery in microservices and how to achieve them. But first, let's cover the basics.

Understanding Microservices Resilience

Resilience in microservices refers to an application's ability to withstand failures, stay available, and deliver consistent performance in distributed environments. Resilience patterns are established mechanisms that empower applications to handle failures gracefully, ensuring stability in complex, distributed systems. By using these patterns, developers can reduce the impact of unexpected errors or high loads, leading to less downtime and better overall performance. In distributed systems, failures are unavoidable due to various factors like network issues, unresponsive services, or hardware problems. Hence, it's essential to acknowledge these uncertainties and develop strategies to manage them effectively.
This is where resilience patterns come into the picture, helping create fault-tolerant systems that respond well to failures, ensuring the application remains available and functional. Implementing resilience patterns in microservices offers several key benefits: - Minimized Service Downtime: These patterns help applications recover quickly from failures, minimizing disruptions and ensuring high availability for users. - Improved Fault Isolation: By using resilience patterns, developers can isolate failures, preventing them from spreading and causing widespread issues. - Consistent System Performance: A resilient microservices application can maintain consistent performance, even under high load or network issues. - Enhanced User Satisfaction: Reliable performance improves user experience, building trust and loyalty. However, there are several factors that can break microservices’ resiliency, such as improper implementation of resilience patterns, network failures, and dependencies on external services with insufficient failover mechanisms. Common Challenges that Break Resiliency in Microservices Let’s take a look at some of these common issues in microservices resilience: - Service Failures: Microservices are spread across different containers or machines, making them susceptible to bugs, hardware issues, or failures in external dependencies. - Network Failures: Communication between microservices happens over networks, leading to problems like increased latency, packet loss, or temporary unavailability of services. - Dependency Management: Microservices often depend on each other for various functionalities. Managing these dependencies and ensuring services can handle changes or failures in their dependencies is complex. - Data Consistency: Maintaining data consistency in distributed databases used by microservices can be challenging. Balancing consistency with partition tolerance, as per the CAP theorem, is crucial. 
- Scalability Challenges: Although microservices allow for independent scaling, managing dynamic scaling to meet varying demands without causing bottlenecks or resource wastage is challenging. - Cascading Failures: A failure in one microservice can trigger a chain reaction of failures in dependent services if proper precautions are not taken, leading to cascading failures. Why Traditional Approaches to Resilience Might Not Be Enough Traditional architectures and designs were not made for the complexity and distribution of microservices. So, traditional resilience approaches, like redundancy in one application or relying on a single powerful server, might not be sufficient for microservices for several reasons: - Complexity: Microservices bring more complexity because they are distributed. Traditional methods that work in simpler architectures may struggle with microservices’ complexities, such as managing service dependencies and handling network issues. - Single Point of Failure: Traditional methods often rely on one central system or server. If that fails, the whole application can go down. In microservices, the aim is to avoid this by having redundancy at different levels. - Resource Efficiency: Microservices allow for better resource use by scaling individual services independently. Traditional methods are less efficient because they scale entire applications, leading to unused resources. - Elasticity: Microservices can scale up and down quickly based on demand. Traditional systems may not be as elastic and can’t adapt as fast. - Isolation and Containment: Microservices need to be isolated to prevent failures from spreading. Traditional methods might not have the right mechanisms for this. Now that we have a fair understanding of why traditional approaches won’t work and what breaks resiliency in microservices, let’s study the different strategies for fault tolerance and failure recovery in microservices. 
Best Practices for Ensuring Resilience in Microservices

As mentioned before, inter-service communication is one of the most common breaking points in microservices architecture. When multiple services collaborate to accomplish a task, errors can occur during this communication. To ensure the fault tolerance of microservices, it is crucial to address these potential errors and establish a reliable communication mechanism.

Microservices architecture employs two main types of communication: synchronous and asynchronous. Asynchronous communication, which uses intermediaries like message queues, is inherently more fault-tolerant. These intermediaries decouple services and provide a buffer that can handle intermittent failures, making asynchronous communication naturally suited for fault tolerance. On the other hand, synchronous communication requires immediate responses and can be more susceptible to failures. However, several patterns and techniques can be employed to make synchronous communication more fault-tolerant:
- Timeouts
- Retries
- Circuit Breaker
- Implement Statelessness and Idempotence
- Adopt Observability and Monitoring
Let's take a look at each one of these strategies!

1. Timeouts

Implementing timeouts in microservices helps prevent prolonged waits for a response, which can occur due to network issues or unresponsive services. By setting a timeout, a service specifies the maximum amount of time it is willing to wait for a response before considering the operation failed. This helps in freeing up resources and ensures that the system remains responsive.

Example in Python using the requests library:

    import requests

    try:
        response = requests.get(url, timeout=5)
        # Process the response
    except requests.exceptions.Timeout:
        # Handle timeout error
        pass

In this example, the timeout=5 parameter specifies a timeout of 5 seconds for the get request.
If the server does not respond within 5 seconds, a Timeout exception is raised, allowing the application to handle the timeout gracefully.

2. Retries

Retrying failed operations is a common strategy to improve the robustness of microservices. By retrying a failed operation, the service has another chance to succeed, especially in cases where the failure is transient.

Example in Python using the retrying library:

    from retrying import retry

    @retry(stop_max_attempt_number=3)
    def risky_operation():
        # Risky operation that might fail
        ...

In this example, the @retry decorator is used to automatically retry the risky_operation function up to 3 times in case of failure. This helps in increasing the chances of the operation succeeding, especially in scenarios where the failure is temporary.

3. Circuit Breaker

The circuit breaker pattern is used to prevent repeated calls to a failing service, which can overload the system and worsen the situation. The circuit breaker monitors the status of the service and "opens" the circuit when it detects a failure. Subsequent calls are then "short-circuited" and fail immediately, without making a request to the service.

Example in Python using the circuitbreaker library:

    from circuitbreaker import circuit

    @circuit(failure_threshold=5, recovery_timeout=30)
    def risky_operation():
        # Risky operation that might fail
        ...

In this example, the @circuit decorator is used to create a circuit breaker for the risky_operation function. If the failure threshold (5 consecutive failures in this case) is reached, the circuit breaker opens, and subsequent calls to risky_operation fail immediately, without making a request to the service. This helps in preventing the system from overloading during periods of high failure rates.

4. Implement Statelessness and Idempotence

Minimize the impact of failures on data and system state by designing services to be stateless and idempotent. A stateless service does not store internal state but relies on external sources for data persistence. This simplifies service recovery and scalability, reducing the risk of data loss or corruption.
An idempotent service can handle repeated requests without changing the outcome, ensuring consistent behavior regardless of request volume or order.

5. Adopt Observability and Monitoring

Use observability and monitoring tools to collect, analyze, and visualize data and metrics about your services and system. These tools help you understand the performance, health, and behavior of your system, enabling you to identify and resolve issues quickly. Logs, traces, alerts, dashboards, and reports are valuable for troubleshooting, optimizing, and improving the reliability of your microservices architecture.

Effective Failure Recovery Mechanisms

In a dynamic microservices environment, ensuring resilience against failures is paramount. Effective failure recovery mechanisms play a crucial role in maintaining system integrity and minimizing downtime. Here, we explore key strategies and tools that can help your microservices architecture recover swiftly and reliably from failures.
- Logging and Monitoring: Comprehensive logging and real-time monitoring are crucial for early detection of failures. Tools like Prometheus and Grafana offer insights into system health and performance, enabling quick identification and resolution of issues.
- Service Meshes: Service meshes like Istio or Linkerd provide an additional layer of infrastructure that manages service communication, offering resilience features like retries, load balancing, and circuit breaking out of the box.
- Disaster Recovery Planning: Having a robust disaster recovery plan, including regular backups and clearly defined recovery procedures, ensures that services can be restored to operation with minimal downtime.
- Backup Strategies: Regular, systematic backups of data and configurations are essential for quick recovery from data loss or corruption.
The effectiveness of a backup strategy is often measured by its recovery point objective (RPO) and recovery time objective (RTO), which indicate how much data loss is acceptable and how quickly systems should be restored after a failure. As microservices continue to evolve, so do the strategies for ensuring their resilience. The adoption of AI and machine learning for predictive analytics is on the rise, offering the potential to preemptively identify and mitigate potential system failures before they impact users. Additionally, the growing emphasis on observability over simple monitoring provides deeper insights into system behavior and performance, leading to more proactive and effective resilience strategies. Ensuring resilience in microservices architectures is key to maintaining high availability, performance, and customer satisfaction. By implementing comprehensive fault tolerance and failure recovery strategies, organizations can protect their systems against inevitable failures and minimize their impact. As technology evolves, so will the tools and techniques for building resilient systems, requiring ongoing attention and adaptation to best practices in microservices architecture. TechBlocks specializes in designing and building scalable microservices architectures, modernizing legacy systems, and implementing cloud-native technologies and DevOps practices. With our expertise, businesses can enhance their agility, scalability, and overall resilience in the digital landscape. Ready to boost your microservices for resilience and performance? Contact us today to discuss how we can help you design and implement a scalable, modernized solution tailored to your business needs.
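The statelessness and idempotence practice described earlier can be illustrated with a small sketch. All names here (process_payment, the in-memory PROCESSED store, the "req-42" key) are hypothetical, not from any particular framework; a real service would keep idempotency keys in an external store such as Redis or a database rather than process memory.

```python
# Hypothetical sketch of an idempotent "charge payment" handler.
# An idempotency key identifies a logical request; retries reuse the key.
PROCESSED = {}  # idempotency key -> stored result (external store in production)

def process_payment(idempotency_key, amount):
    # A repeated request with the same key returns the stored result
    # instead of charging twice, so client retries are safe.
    if idempotency_key in PROCESSED:
        return PROCESSED[idempotency_key]
    result = {"status": "charged", "amount": amount}
    PROCESSED[idempotency_key] = result
    return result

first = process_payment("req-42", 100)
second = process_payment("req-42", 100)  # a retry of the same logical request
print(first is second)  # True: the duplicate call did not charge again
```

Because the outcome is the same no matter how many times the request is replayed, this handler composes safely with the retry and timeout patterns above.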
OPCFW_CODE
A First Course In Machine Learning Matlab Code

This is an introductory article on Machine Learning MatLab's First Course in Machine Learning. This is a first course in machine learning. We will cover the following topics: How To Implement In Matlab; How to use Matlab via C++. What is Matlab? Matlab is an open source software library for programming. It is an open-source program that is used to create, manage, test, benchmark, and run the programs that build, manage, and run all the code and functionality of the program. How Matlab Works: While Matlab is an Open Source software library, it is not a framework for use in a computer. The program is written in C++, but has been converted into a C API using Matlab's function 'GetFunction()'. Why Matlab Works? The program is written using a C++ library. This library has been converted to a C API by Matlab's function "GetFunction()". What Matlab Does: The following 3 questions are being asked as part of the Second Course. What should I do to get the results of my function? What do I need to do to get to the results of the function? We have the following functions in R, C, and Matlab. These are represented as names: function (Function). This function will be called when a function call is run. Function calls are run by calling the function. Function calls will be done with the call to the function and will run in parallel.

2.1 The GetFunction() function

The GetFunction() method is a function that takes a function argument and returns a function pointer. The GetFunction function is a function for passing a function pointer to a function call. The function object is represented as a C callable. A function object is a function object that may be used as a parameter of a function call, and may be used to pass a function pointer as argument to a function.
This function is called when a call to the GetFunction() call is run, and returns the result of calling the function as a function pointer (the back-reference to the calling function object). The function object returned by GetFunction() is the returned function object.

How To Use Matlab For Machine Learning Implementation

We have an implementation of GetFunction() that implements the function that we have mentioned. The implementation of this function is called by calling the GetFunction function. To make this work we need to use a callable function object. The instance of the function object represents a function call that is run by the GetFunction method. The function call has the following parameters: the function reference to a function object is the value of the function argument. This is the value that is passed to the Get function. This parameter is the value passed to the function call.

The Function

We are using the function like this: Function = GetFunction(f); We can first see that the function is being called. A callable function is a callable object that represents a function object and can be called by a function. The Get function should be called by calling GetFunction() and the Get function should return the result of the call or be called if the call is not successful. Here is a sample function call:

    // call to a function with this name
    function GetFunction(name)
        // will get result of a function called with this name, then return
        return GetFunction(GetFunction(name)); // and return this result, so it is passed to this function
        return new function(name); // for now make sure we have a function reference to this object

    function f = GetFunction();

We then have a function call (f's parameter) that is called with the name given in the GetFunction argument. The returned function object will be a function object. This function type is called as a reference to the Get Function itself.
We can now call the function without the Get function argument, and with the Get function reference.

3. The Interface to the Function

A function is a method that returns an object of the same type as the function. We can use the functions provided by the function to pass a pointer to

A First Course In Machine Learning Matlab Code

Siemens is a one-stop shop for programming, machine learning, and machine learning technologies. In this tutorial, I'll take you through the basics of the programming language MATLAB.

Introduction

The basic idea of programming is to make a program that can be tested, tested in a few steps to make it a "test" program. This is a very basic concept, but it's one that is extremely useful and taught at many universities and businesses. How Do I Get A License For Matlab? In this tutorial, you'll learn the basics of programming. In this case, you'll learn how MATLAB can be used to write programs.

Background

In the previous tutorial, I wrote a small tutorial that can be used with two other programs. In this one, you'll start with a simple example of computing an example of a particular function. First, we'll write the function. This example is about computing a function such as: function f(x) is function my_example(x) called is_f(x) is. The function is a simple object that we can use to call the function. We'll call the function with our input x, the function name with the function name, and the result of the function. Then we'll do the same thing with the function. The function is: my_example(1). The output is: 1. The second example is about a simple program that uses Matlab to do a simple data analysis.
We have two input values x = [x1,x2,x3]. We want to call the following function: f(x1) f(y1) my_exact_x(x1,y1) is my_Example(1) my_Example(2) my_Exact_x. The end result is: 2. The first example is about the efficiency of the output of the function: f(1) = 0.1, f(2) = 0. The results are: 2, 0.0001, 0.0001. The third example is about how to use the function: f(x2) f(-x2) = f(x1+x2) + f(x3), f(-1) = f(-2) = -1, f(-2) = f(-3). We'll again call the function to give a few more examples. The main idea of the function is to use Matlab to perform some calculations. If we want to do some calculations, we'll do a simple multiplication: x = 1, y = 2. Then, we'll call the multiplication function: x.multiply(y). We can use Matlab's "add" and "sub" operators to add and subtract a number to x and y. Matlab is a very powerful tool for making calculations and passing data to and from the Matlab code. Also, we can use the "bind" operator to bind two or more calculations. Let's "reform" the function. This is how Matlab usually performs calculations.

Matlab Basics Tutorial

First, you'll create a list of variables. We can use the function name to show the list of variables that we want to change. We can also use the function ID to add or remove a value to any of the list of variables. Second, you'll "reform the function with a dot operator to get the result of a particular operation." We use the function like this: % function f(x,y) f = my_example.f(x, y). Third, we use the function to compute an example. Next, we'll do some operations. f = f(1) + 1, f = 2 + 1. f is a simple function that we can call by using the dot operator. So we've to do the same thing, but with the function ID.

A First Course In Machine Learning Matlab Code

These are the first two posts in a series to cover the basics of machine learning algorithms and the basics of how to implement them. We'll also cover more work and more code in the next two posts.
The first post is about Machine Learning, specifically the machine learning framework used to perform a classification task. The framework uses a variety of programming languages to perform tasks that are useful when solving problems. The framework is a bit different from the rest of the book, but a basic understanding of the basics of the framework can be found in the first two articles. I'll start with the basics of Machine Learning.

Machine Learning

First, let's take a look at some of the basic details of machine learning.

Human-like Models

The basic difference between human-like and machine-like models are the following: the human-like models assume that you know how to model a piece of data, but not necessarily how to evaluate the piece of data. They use a variety of computer programs to do this. A computer program, named "sparse", is an example of a human-like model. As you might have noticed in previous chapters, humans are very similar to machine-like objects. We'll take a look.

Sparse Models

Sparrows are a computer program that takes a problem and a set of values and outputs the result. You use a program called "aggregation" to aggregate each value into a set. Aggregation is a collection of several operations. The most popular method is to use the "aggregator" command. Using the "Aggregation" command, you can get the value for each piece of data and how to evaluate each piece of value. Each piece of data has a 5-by-5 array of integers. You can pull out the values or use the "expand" command to expand the array. Applying the "expand" command to each piece of input data can be a very handy way to evaluate the value. An example of the application is as follows: with each piece of information you could also get the values of the piece of information. Here is an example: you can see that the piece of the piece is a 5 by 5 array, which is the square of the number of its elements.
You could also plot the square of a piece of information on this square of the piece to see how many square-sized pieces it has. As the square of an array is not an integer, the square of each piece of the array will have a 4 by 4 array of the same size. With these pieces of information you can easily evaluate the piece in the target area, with the "Expand" and "Expander" commands. What are the steps to use the Aggregation command?

Aggregator

The "Aggregator" command is a program that takes an array of integers and a 5 by 5 array of integers to aggregate. The value for each array is called an aggregate. Since each entry in the array is a 5-element array, the aggregate value for the element is 5 by 5. To use the "Aggregation" command, open the "Evaluation" page. In the evaluation page, you can see that you have the amount of information you need to evaluate the item. It's a little rough to determine how many pieces of information are needed. But, it's this way: to get the aggregate value, you need to get all the pieces of the item, without the aggregate.

Teach Yourself Matlab

If you want to get the aggregate for the entire item, you can do so. If you have a piece of the item in the array, which contains the 5-by-5 array of the item's pieces, you can use it. If the piece in your array is: what is the value of the piece in that piece? Here are two examples. This is a very simple example. If your piece of information in the array is a piece of 4 pieces in
OPCFW_CODE
- Volume 14 Issue 11

Reuse of Input Queue Item Towards Economical Agile Reuse
- Kim, Ji-Hong (Dept. of Computer Engineering, College of IT, Gachon University)
- Received : 2016.09.29
- Accepted : 2016.11.20
- Published : 2016.11.28

The aim of the study is to combine software reuse with agile methods through reuse in the early stage of agile development. Although agile methods and software reuse have different practices and principles, these methods have common goals, such as reducing development time and costs and improving productivity. Both approaches are expected to serve as viable solutions to the demand for fast development or embracing requirement changes in rapidly changing environments. In the present paper, we identify economical agile reuse and its types and study a reuse technique for the input queue in a Kanban board at the early stage of hybrid agile methods. Based on our results, we can integrate software reuse with agile methods by backlog factoring for input queue items in the hybrid Scrum and Kanban method. The proposed technique can be effectively applied to e-class applications and can reuse the input queue items, showing the combination of the two approaches. With this study, we intend to contribute to reuse in the early stage of agile development. In the future, we plan to develop a software tool for economical agile reuse.
OPCFW_CODE
//
// Calendar+extensions.swift
// appstarter-pod-ios
//
// Created by Gabrielle Earnshaw on 07/08/2019.
//

import Foundation

public extension Calendar {
    /// Takes an array of components with a date, and returns a unique list of
    /// days covered by the components.
    /// NB the return value contains the time for the start of the day, which is dependent
    /// on the calendar (and specifically the timezone of the calendar)
    ///
    /// - Parameter dateds: a list of components with dates
    /// - Returns: A list of unique dates covered by the components, sorted in ascending date order
    func getUniqueDays(dateds: [Dated]) -> [Date] {
        let startOfDays = dateds.map { startOfDay(for: $0.date) }
        let uniques = Set(startOfDays)
        return Array(uniques)
            .sorted()
    }
}
STACK_EDU
The Python string count() function is used to get the number of occurrences of a substring in the given string. That is, the count() method searches for the substring in the given string and returns how many times the substring occurs in it. Note: the count() function is case-sensitive, which means searching for "Python" will not match "python".

string.count(value, start, end)
- value (substring): the substring whose count is to be found.
- start: the position to start the search. Default is 0. (Optional)
- end: the position to end the search. Default is the end of the string. (Optional)

Returns the number of occurrences of the substring in the given string.

String count function example in Python

An example of counting the number of occurrences of a substring in a string in Python. We are not using a start and end limit for this example. Note: indexes in Python start from 0, not 1.

Search "Python" in the whole string:

txt = "Python is programing language. Python is easy. Learn Free Python "
x = txt.count("Python")
print(x)  # 3

Count substring occurrences using start and end in Python

Search from position 0 to 18:

txt = "Python is programing language. Python is easy. Learn Free Python "
x = txt.count("Python", 0, 18)
print(x)  # 1

Python count string length

Use the len() function to get the length of a string. See the example below:

str = "Hello Python"
print(len(str))  # 12

Read more examples: Python length of a list

Q: How to count total characters in a string in Python?
Answer: To get the total characters in a string, use the len() function.

str1 = "Hello"
x = len(str1)
print(x)  # 5

Q: How to count overlapping substrings in Python?
Answer: The count() function does not count overlapping occurrences. For this, we need to write our own function. Keep a count variable to store the count and pos to track the starting index of the sub-string. When the sub-string is found, increment the counter and continue searching from the next index. This is how we count overlapping substrings.
def frequencyCount(string, substr):
    count = 0
    pos = 0
    while True:
        pos = string.find(substr, pos)
        if pos > -1:
            count = count + 1
            pos += 1
        else:
            break
    return count

print("The count is: ", frequencyCount("thatthathat", "that"))

Do comment if you have any doubts and suggestions on this tutorial.

IDE: PyCharm 2020.1 (Community Edition)
All Python Examples use Python 3; the behavior of string.count may differ in Python 2 or other versions.
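As an alternative to the manual find() loop above, Python's standard re module can count overlapping occurrences with a zero-width lookahead. This uses only documented re behavior; overlap_count is our own illustrative name, not a built-in:

```python
import re

def overlap_count(s, sub):
    # A lookahead match consumes no characters, so re tests every
    # starting position, including ones inside a previous match.
    return len(re.findall("(?=" + re.escape(sub) + ")", s))

print("thatthathat".count("that"))           # 2 (non-overlapping)
print(overlap_count("thatthathat", "that"))  # 3 (overlapping)
```

re.escape protects substrings that contain regex metacharacters, so this works for arbitrary input.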
OPCFW_CODE
Chapter 1: What is Contemporary Musical Theatre

Recommended Reading: Musical Theatre History
Knapp, Raymond. 2006. The American Musical and the Formation of National Identity. Princeton: Princeton University Press.
Leve, James. 2015. American Musical Theater. New York: Oxford University Press.
McMillin, Scott. 2006. The Musical as Drama. Princeton, NJ: Princeton University Press.
Stempel, Larry. 2010. Showtime: A History of the Broadway Musical Theatre. New York: W.W. Norton & Company, Inc.
Wolf, Stacy. 2020. Beyond Broadway: The Pleasure and Promise of Musical Theatre Across America. New York: Oxford University Press.

Recommended Reading: Different Kinds of Musicals
Barrios, Richard. 2014. Dangerous Rhythm: Why Movie Musicals Matter. New York: Oxford University Press.
Flinn, Denny Martin. 2008. The Great American Book Musical: A Manifesto, A Monograph, A Manual. New York: Limelight Editions.
Gordon, Robert, and Olaf Jubin, eds. 2016. The Oxford Handbook of the British Musical. New York: Oxford University Press.
Locke, Charley. 2017. "Musicals (Yes, Musicals) Are About to Shake Up Podcasting." Wired, July 14, 2017. https://www.wired.com/story/36-questions-musical-podcast/
Powers, Brandon. 2021. "The Future of Musical Theatre is on TikTok." Medium, January 19, 2021. https://brandonpowers.medium.com/the-future-of-musical-theatre-is-on-tiktok-8ecb7ef660a1
Hahn, Don, director. 2018. Howard. Disney+. https://disneyplusoriginals.disney.com/movie/howard

Recommended Reading: Further Integrating Movement into the Musical
Cramer, Lyn. Creating Musical Theatre: Conversations with Broadway Directors and Choreographers. London: Bloomsbury Methuen Drama.
Hill, Constance Valis. 2015. Tap Dancing America: A Cultural History. New York: Oxford University Press.
Jowitt, Deborah. 2005. Jerome Robbins: His Life, His Theater, His Dance. New York: Simon & Schuster.
Winkler, Kevin. 2018. Big Deal: Bob Fosse and Dance in the American Musical. New York: Oxford University Press.
Recommended Reading: How Musicals are Developed and Produced
Breglio, John. 2016. I Wanna be a Producer: How to Make a Killing on Broadway… or Get Killed. Milwaukee: Applause Theatre & Cinema Books.
Hoffman, Warren. 2020. The Great White Way: Race and the Broadway Musical. New Brunswick: Rutgers University Press.
MacDonald, Laura, and William A. Everett, eds. 2017. The Palgrave Handbook of Musical Theatre Producers. New York: Palgrave Macmillan.
Osatinski, Amy S. 2019. Disney Theatrical Productions: Producing Broadway Musicals the Disney Way. New York: Routledge.

Recommended Reading: Globalization of Musical Theatre
Kim, Shin Dong. 2015. “The Industrialization and Globalization of China’s Musical Theater.” Media Industries Journal 1 (3): 12–17.
Tanaka, Rina. 2017. “Musicals in Post-Globalization: the Case of ‘Ever-Growing’ Musicals from Vienna via Japan.” ReVisions, March 14, 2017. https://revisions.pubpub.org/pub/musicals-in-post-globalization/release/1
Taylor, Millie, and Dominic Symonds. 2014. Studying Musical Theatre: Theory and Practice. London: Palgrave.
Weather Perl Program

UPDATE 280135ZAUG2009: made some massive improvements!

So I found some inspiration to start coding today and ended up writing a pretty cool little Perl program that takes a latitude/longitude coordinate, goes through a large list of ZIP codes and finds the ZIP code nearest the provided grid coordinate. Then it uses the Weather::Google Perl module to find the current conditions for that grid.

- Make it faster. Right now it loops through every single ZIP code no matter how far away they are. What I'm thinking is I will partition the US into chunks and do a quick guess of what region you fall into, then subpartition that as well. If I split the US into 10 chunks, then split those into 10 more subchunks, you should see a performance gain of almost 10^2.
- Make it an AJAX web application. I keep looking at AJAX webapps and thinking how cool they look, then I try to follow the W3Schools manual and give up. Can it really be that hard, or am I just a born quitter? In any case, I want this to be a cool webapp.
- Support MGRS/UTM coordinates. This means conversions, which shouldn't be too hard, but which means some loss of precision. The database I'm using isn't perfect anyway, so it'll have to suffice.

- As indicated at the top of the page, I did indeed make this idea into a webapp.
- MGRS and UTM are now supported through the use of Geo::Coordinates::UTM. Users simply won't have input validation, but the webapp will be otherwise fully available for them.
- I spent a lot of time working on optimization. My original plan for an algorithm seemed beautiful in my head, but it just wasn't working the way I wanted in practice, so I ended up breaking the database into chunks by state. The master database contains a list of each state with the average latitude and longitude for ZIP codes in that state. If this algorithm sounds like a hackjob to you - it is. But it seems to be working so far. Please let me know if you find a way to trick it.
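The nearest-ZIP search described above can be sketched in a few lines. This is shown in Python rather than Perl, and the tiny ZIP "database" below is invented for illustration; the real program reads a full ZIP list and buckets it by state:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3956 * asin(sqrt(a))  # 3956 mi ~ Earth's radius

# Hypothetical mini-database: state -> [(zip, lat, lon), ...]
ZIPS_BY_STATE = {
    "CO": [("80202", 39.7491, -104.9973), ("80301", 40.0499, -105.2277)],
    "KS": [("67601", 38.8793, -99.3268)],
}

def nearest_zip(lat, lon):
    """Brute-force search over every ZIP; the blog's optimization would
    first pick a likely state by its average coordinate, then search
    only that state's bucket."""
    candidates = (z for zs in ZIPS_BY_STATE.values() for z in zs)
    best = min(candidates, key=lambda z: haversine_miles(lat, lon, z[1], z[2]))
    return best[0]

print(nearest_zip(39.74, -104.99))  # nearest sample point: 80202
```

The state-bucketing trick trades a little accuracy near state borders for a roughly state-count speedup, which matches the "hackjob but working" description above.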
- The webapp removes support for Weather::Google. I know, I liked Weather::Google too, but I ran into some problems and got to thinking. That thinking made me realize that instead of pulling weather data for the user I could instead provide some links to weather websites. I only have Weather.com, NOAA, AccuWeather, and Google being generated right now. If there's a good weather website you think should be listed, let me know.

Original Version: Download. I use Strawberry Perl on my Windows computer. You'll probably need to use CPAN to install Weather::Google and

Second Version: Download. This one requires Geo::Coordinates::UTM and maintains the dependencies from the previous program. This distribution also includes new, smaller databases and the Perl program I wrote to generate those small databases.

Chris Michels' page on latitude/longitude distance calculation was helpful in creating this program.
We have great expertise in supporting companies and candidates in their social recruiting and talent hunting journey, alleviating the “skill gap” issue. Our capabilities and our problem-solving approach are proven by the appreciation of so many customers. Let’s have a talk! On this subject, here is an article about AI & recruiting: AI is making big strides into recruiting technology, with startups and established vendors investing heavily in products for a range of uses including interview scheduling, sourcing, and assessments. While it’s certain that AI will disrupt recruiting by automating a lot of what recruiters do, the technology is barely beyond infancy. This limits what it can deliver — at least for now. Most products that use AI have one defining characteristic — they can do one narrowly defined task only, using specific data to produce a response. For example, which candidate is likely to respond to a solicitation for a job? Or, which applicant is likely to be a high performer? Even products that perform tasks which may appear less well defined take this approach. Interview scheduling means finding a match between the interviewer’s and candidate’s availability. It’s one task that’s completed based on interpreting natural language responses. When trained using large amounts of data, the algorithms get better at completing the task or producing the response. More appropriately, what AI is doing in most cases is pattern recognition. That is, identifying trends, commonalities, and traits in data that are beyond the capability of any individual or even group to do. An example is the facial recognition technology developed by ZIFF. The product can capture data on subtle behaviors like eye movement, smiling, and other expressions and uses it to predict how well a candidate would perform in a job. 
What AI Cannot Do
But AI has its limitations, stemming from the fact that no AI product is “intelligent.” It may be able to perform a task better than any human, but it has no context for what it is doing. AI products cannot make value judgements. An example of this was Microsoft’s AI-based chatbot that started using foul and extremely racist language. This was not because of anything the developers did, but because the chatbot was being trained on language used by those it interacted with. Since AI products are trained rather than programmed, the training data set will influence what they produce … much like a child imitating the behavior of its parents without knowing right from wrong. It’s too early to know if any technology employed for recruiting produces biased results, but it may well happen if it’s trained on a dataset that reflects past biases. There are examples from other fields, such as a product that predicts future criminal behavior and thereby influences sentencing. An evaluation of outcomes suggests a propensity for bias, which has real-world consequences for those whose lives are affected as a result. The same could be true for any AI-driven product that predicts performance on the job if the training data set is not carefully chosen. There are also more mundane issues relating to AI. An interview-scheduling application may not recognize that a highly qualified candidate who expresses urgency in finding a time to interview may be receiving other offers and should be prioritized above others. The application has no way of knowing if a candidate is more qualified than others and should be shown preference. It’s not because of a lack of programming, but because it lacks the overall context for the interview. Another limitation for AI systems is the explainability problem. The technology can be a black box; in other words, one where it is hard to explain how a certain decision was reached.
A product may produce good results, but a lack of explainability limits its credibility in the eyes of those who rely on it for making decisions. Having to take it on faith that the product works does not inspire confidence. New laws, such as the European Union’s GDPR, could stop the use of any AI product for recruiting where this problem exists. The law specifically includes a right to explanation: individuals have the right to be given an explanation for decisions that significantly affect them. So a person denied a job because the process included recommendations from AI could insist on an explanation of how the algorithm produced the decision. AI models also cannot transfer their learning - that is, use their experiences from one set of circumstances to work in another. Unlike a recruiter who can use the skills and experience developed in filling jobs in one industry or field to filling jobs in others, whatever an AI model has learned for a given task remains applicable to that specific task only. To adapt the model to work on something even slightly different means training it all over again. A model that predicts which chemical engineers perform well in the energy industry will be useless in predicting which perform well in the food industry.

The Future of AI
What is true today for AI may not hold tomorrow.
TeXStudio: Add new word to dictionary via button

I installed a new Dutch dictionary from the Open Office website in my 'dictionaries' folder. Spell checking works perfectly in TeXStudio, but now I'd like to add new words to the dictionary during the spelling check. I've already found this: TeXworks: How to add a word to the spell checker dictionary?. However, that's not exactly what I want. Instead of editing the .dic and .aff files manually, I'd like to have a button, so that the new word is added to the dictionary automatically.

Edit: What about adding a link to the spelling check box that opens the .dic and .aff files? Is that possible? I just want to avoid a long search for the files every time. How can I do that?

Comments:
Hi and Welcome to TeX.SX! I hope I got it right that you speak about TeXworks, so I added the appropriate information to the question ;) If it's not correct, you can hit the edit button and revert the changes. As to your question: I don't think it is possible in the current version of TeXworks.
Indeed, I'm working with the latest version of TeXStudio.
Since spell checking is not a built-in function of TeXworks or TeXstudio, I don't think it is possible; TeXworks and/or TeXstudio use the .dic and .aff files as a database and check whether there is a typo. What do you mean by "add a link to the files"? Do you mean you want to put links, pointing to other files, in your PDF output?

Answer:
Hunspell dictionaries (.dic and .aff) are more complex than plain word lists (i.e. they contain information about possible affixes). Therefore, a proper entry cannot be generated from a single word. The solution in TeXstudio is to store additional words in an ignore word list (.ign) next to the dictionary. This list can be populated via Context Menu -> Always Ignore, or via the Always Ignore button in the spell checking dialog.
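Since the .ign ignore list is far simpler than a Hunspell dictionary, populating it outside of TeXstudio could also be scripted. This sketch assumes the .ign file is a plain newline-separated word list, and the file name nl_NL.ign is a hypothetical example:

```python
from pathlib import Path

def add_ignore_word(ign_path, word):
    """Append `word` to an ignore-list file if it is not already present."""
    path = Path(ign_path)
    existing = set()
    if path.exists():
        existing = set(path.read_text(encoding="utf-8").split())
    if word not in existing:
        with path.open("a", encoding="utf-8") as f:
            f.write(word + "\n")

# Hypothetical ignore list sitting next to the Dutch dictionary files
add_ignore_word("nl_NL.ign", "TeXstudio")
```

Pointing such a script at the VirtualStore path mentioned below would let you batch-add words without hunting for the file each time.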
Note: On Windows 7 the default dictionaries are in C:\Program Files (x86)\TeXstudio\dictionaries\. Windows 7 prevents writing to the program files directory. It redirects the writes to the user dictionaries, which are in C:\Users\<User>\AppData\Local\VirtualStore\Program Files (x86)\TeXstudio\dictionaries. You may find the .ign files there.

On a Mac it looks like the added words are in /Applications/texstudio.app/Contents/Resources. This means that words added by the user are put in /Applications, which I would have thought is very non-standard. Also, does this mean that if I have to reinstall TeXstudio for some reason all of the words I have added will be lost? Why aren't the added words in a file in ~/.config/texstudio, and could that be configured? I wonder the same thing about the configuration...
[Bug]: Lack of Input Validation for txHash Parameter

What happened?
Lack of Input Validation for txHash Parameter

Vulnerable Code
https://github.com/etherisc/flightdelay-ui/blob/develop/src/app/api/purchase/[tx]/route.ts

The vulnerability arises from the direct use of the txHash parameter from the request without validation. Here is the relevant code:

```typescript
export async function GET(request: NextRequest, { params }: { params: { tx: string } }) {
    const reqId = nanoid();
    const txHash = params.tx; // No validation performed on txHash
    const signer = await getBackendVoidSigner();
    // fetch rating data from flightstats
    const { policyNftId, riskId } = await checkPolicyCreated(reqId, txHash, signer);
    if (policyNftId === BigInt(0)) {
        return Response.json({ error: 'Policy not created yet' }, { status: 202 });
    }
    return Response.json({ policyNftId, riskId }, { status: 200 });
}
```

Description
The GET function is responsible for checking the status of a transaction related to policy creation on a blockchain. It accepts a txHash parameter from the request, which is expected to be a valid Ethereum transaction hash. However, the code does not perform any validation on this parameter before using it. This lack of validation can lead to several security and operational issues.

Impact
- Malformed Requests: Without validation, the function may accept malformed transaction hashes, leading to errors when interacting with the blockchain. This can cause the application to crash or behave unpredictably.
- Denial of Service (DoS): An attacker could send a large number of malformed or invalid transaction hashes to overload the system, potentially leading to a denial of service for legitimate users.
- Security Risks: While traditional injection attacks are unlikely with transaction hashes, accepting unvalidated input can still pose risks, especially if the input is used in other contexts where injection might be possible.
- Data Integrity: Using invalid transaction hashes could lead to incorrect or misleading data being processed or returned, affecting the integrity of the application's operations.

Severity
Critical: The severity is critical due to the potential for application crashes, denial of service, and data integrity issues. The lack of validation exposes the application to a range of potential exploits that could disrupt service and compromise reliability.

Proof of Concept (PoC)
Setup: Deploy the application and ensure it is accessible via a network request.
Execution: Send a request to the GET endpoint with an invalid or malformed txHash:

```shell
curl -X GET "http://your-server/api/endpoint?tx=invalidTxHash"
```

Observation: Observe the server logs and application behavior for errors or crashes resulting from the invalid input.

Suggested Fix
Implement Input Validation: Validate the txHash parameter to ensure it matches the expected format and length for a transaction hash (typically a 66-character hexadecimal string starting with "0x").

```typescript
function isValidTxHash(txHash: string): boolean {
    return /^0x([A-Fa-f0-9]{64})$/.test(txHash);
}

export async function GET(request: NextRequest, { params }: { params: { tx: string } }) {
    const reqId = nanoid();
    const txHash = params.tx;
    if (!isValidTxHash(txHash)) {
        return Response.json({ error: 'Invalid transaction hash format' }, { status: 400 });
    }
    const signer = await getBackendVoidSigner();
    const { policyNftId, riskId } = await checkPolicyCreated(reqId, txHash, signer);
    if (policyNftId === BigInt(0)) {
        return Response.json({ error: 'Policy not created yet' }, { status: 202 });
    }
    return Response.json({ policyNftId, riskId }, { status: 200 });
}
```

Error Handling: Provide meaningful error messages to users when validation fails, helping them understand and correct the issue. This can improve user experience and reduce frustration.
Logging and Monitoring: Implement logging of invalid transaction hash attempts to monitor potential abuse or attack patterns.
This can help in identifying and mitigating denial of service attacks.

What operating system are you seeing the problem on?
Other (please mention in the description)
What browsers are you seeing the problem on?
Other (please mention in the description)
What wallet extension are you using?
Other (please mention in the description)
Rules
[X] I agree to follow this project's rules

Maintainer response: The worst that will happen if you call it with an invalid txHash parameter is that the API returns an error instead of the message that the tx was not found. There is no way to change state or anything through the use of this API, so no harm can be done apart from returning the error. Also, the internal API part is not intended to be used on its own. If you do, we do not provide any guarantees for things to work as expected.
c8y-configuration-plugin panics when required folder/s are missing

Describe the bug
The c8y-configuration-plugin panics if the plugin is started and the /etc/tedge/c8y folder does not exist.

To Reproduce
The panic can be reproduced by running the following commands (assuming the c8y-configuration-plugin service has already been stopped beforehand):

```shell
rm -Rf /etc/tedge/c8y
c8y-configuration-plugin
```

Expected behavior
The following behaviour is expected from the c8y-configuration-plugin executable:
- It should not panic (ideally the process should not exit, but this depends on the design. If it does exit, then it should exit with a non-zero exit code and not panic)
- An error or warning message should be printed indicating which folder does not exist (to help the user with troubleshooting)

Screenshots
The console output showing the error is below:

```
The system config file '/etc/tedge/system.toml' doesn't exist.
Use '/bin/systemctl' as a service manager.
2023-05-08T09:13:35.074063799Z INFO Runtime: Started
2023-05-08T09:13:35.075157049Z INFO c8y_config_manager::plugin_config: Reading the config file from /etc/tedge/c8y/c8y-configuration-plugin.toml
2023-05-08T09:13:35.075176549Z ERROR c8y_config_manager::plugin_config: The config file /etc/tedge/c8y/c8y-configuration-plugin.toml does not exist or is not readable.
No such file or directory (os error 2)
2023-05-08T09:13:35.075357757Z INFO Runtime: Running Signal-Handler-0
2023-05-08T09:13:35.07536734Z INFO Runtime: Running MQTT-1
2023-05-08T09:13:35.075370757Z INFO Runtime: Running C8YJwtRetriever-2
2023-05-08T09:13:35.075373757Z INFO Runtime: Running HTTP-3
2023-05-08T09:13:35.07538109Z INFO Runtime: Running C8Y-REST-4
2023-05-08T09:13:35.075384549Z INFO Runtime: Running FsWatcher-5
2023-05-08T09:13:35.07538759Z INFO Runtime: Running ConfigManager-6
2023-05-08T09:13:35.075390465Z INFO Runtime: Running Timer-7
2023-05-08T09:13:35.075394007Z INFO Runtime: Running HealthMonitorActor-8
2023-05-08T09:13:35.075402299Z INFO c8y-configuration-plugin: send Message { topic: Topic { name: "tedge/health/c8y-configuration-plugin" }, payload: "{"pid":1276,"status":"up","time":1683537215}", qos: AtLeastOnce, retain: true }
2023-05-08T09:13:35.07559009Z INFO C8Y-REST: start initialisation
2023-05-08T09:13:35.07559959Z INFO C8Y-REST => JWT: send ()
2023-05-08T09:13:35.075604507Z INFO C8YJwtRetriever: recv Some((0, ()))
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: FromNotifyError(Error { kind: Io(Os { code: 2, kind: NotFound, message: "No such file or directory" }), paths: [] })', crates/extensions/tedge_file_system_ext/src/lib.rs:126:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted
```

Environment (please complete the following information):
- OS [incl. version]: alpine 3.17
- Hardware [incl. revision]: docker engine
- System-Architecture [e.g. result of "uname -a"]: aarch64
- thin-edge.io version [e.g. 0.1.0]: 0.10.0-209-gb9a17aba

Additional context
QA has thoroughly checked the feature and here are the results:
- [ ] Test for ticket exists in the test suite.
- [x] QA has tested the feature and it meets the required specifications.
Test steps:

```shell
sudo systemctl stop c8y-configuration-plugin.service
sudo rm -Rf /etc/tedge/c8y
c8y-configuration-plugin
```

Result:

```
The system config file '/etc/tedge/system.toml' doesn't exist.
Use '/bin/systemctl' as a service manager.
2023-06-02T09:43:03.015538048Z INFO Runtime: Started
2023-06-02T09:43:03.022397852Z INFO c8y_config_manager::plugin_config: Reading the config file from /etc/tedge/c8y/c8y-configuration-plugin.toml
2023-06-02T09:43:03.02247561Z ERROR c8y_config_manager::plugin_config: The config file /etc/tedge/c8y/c8y-configuration-plugin.toml does not exist or is not readable. No such file or directory (os error 2)
Error: Directory: "/etc/tedge/c8y" not found
```
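The expected behaviour (a clear error naming the missing folder plus a non-zero exit status, instead of a panic) can be illustrated with a small sketch. The real plugin is written in Rust; this Python stand-in only demonstrates the desired check, and the directory path is taken from the report:

```python
import sys
from pathlib import Path

def check_config_dir(path):
    """Return 0 when the directory exists; otherwise print an error naming
    the missing folder and return a non-zero status - no panic, no abort."""
    if not Path(path).is_dir():
        print(f'Error: Directory: "{path}" not found', file=sys.stderr)
        return 1
    return 0

# A missing directory yields an error status rather than a crash:
status = check_config_dir("/etc/tedge/c8y-example-missing")
print("exit status:", status)
```

Performing this check at startup, before any file watcher is attached, is what turns the unwrap-panic into the tidy "Directory not found" error shown in the QA result above.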
import numpy as np
from joblib import Parallel, delayed
from pathlib import Path

from mushroom_rl.algorithms.value import QLearning, DoubleQLearning, \
    WeightedQLearning, SpeedyQLearning
from mushroom_rl.core import Core
from mushroom_rl.environments import *
from mushroom_rl.policy import EpsGreedy
from mushroom_rl.utils.callbacks import CollectQ
from mushroom_rl.utils.parameters import Parameter, ExponentialParameter

"""
Simple script to solve a double chain with Q-Learning and some of its
variants. The considered double chain is the one presented in:
"Relative Entropy Policy Search". Peters J. et al. 2010.
"""


def experiment(algorithm_class, exp):
    np.random.seed()

    # MDP
    path = Path(__file__).resolve().parent / 'chain_structure'
    p = np.load(path / 'p.npy')
    rew = np.load(path / 'rew.npy')
    mdp = FiniteMDP(p, rew, gamma=.9)

    # Policy
    epsilon = Parameter(value=1.)
    pi = EpsGreedy(epsilon=epsilon)

    # Agent
    learning_rate = ExponentialParameter(value=1., exp=exp, size=mdp.info.size)
    algorithm_params = dict(learning_rate=learning_rate)
    agent = algorithm_class(mdp.info, pi, **algorithm_params)

    # Algorithm
    collect_Q = CollectQ(agent.Q)
    callbacks = [collect_Q]
    core = Core(agent, mdp, callbacks)

    # Train
    core.learn(n_steps=20000, n_steps_per_fit=1, quiet=True)

    Qs = collect_Q.get()

    return Qs


if __name__ == '__main__':
    n_experiment = 500

    names = {1: '1', .51: '51', QLearning: 'Q', DoubleQLearning: 'DQ',
             WeightedQLearning: 'WQ', SpeedyQLearning: 'SPQ'}

    log_path = Path(__file__).resolve().parent / 'logs'
    log_path.mkdir(parents=True, exist_ok=True)

    for e in [1, .51]:
        for a in [QLearning, DoubleQLearning, WeightedQLearning,
                  SpeedyQLearning]:
            out = Parallel(n_jobs=-1)(
                delayed(experiment)(a, e) for _ in range(n_experiment))
            Qs = np.array([o for o in out])

            Qs = np.mean(Qs, 0)

            filename = names[a] + names[e] + '.npy'
            np.save(log_path / filename, Qs[:, 0, 0])
Managing your SD card

The SD card is a vital part of the Bela system. It’s often necessary to copy (or “flash”) the Bela software from your computer to an SD card for use in your Bela system, or to back up your projects or the entire contents of your SD card. This article is about managing the data on your SD card - flashing it (using free GUI software or the command line) - as well as backing up both your projects and your whole Bela software image. If you bought a Bela Starter Kit, your Bela unit's internal memory will be pre-flashed with the Bela software. Bela Mini Starter Kits include a micro SD card flashed with the Bela software.

Table of contents
- Requirements of the SD card
- Flash an SD card using Balena Etcher
- Flash an SD card using the command line
- Backing up your SD card
- Flashing the eMMC
- Having trouble?

Requirements of the SD card
All Bela systems use micro SD cards. The Bela Mini Starter Kit comes with an SD card pre-flashed with the latest software. You can also purchase a pre-flashed SD card from our shop, or add one to any order. If you want to use an SD card of your own, make sure it has at least 8GB of space. We recommend using a 16GB card to ensure lots of space for assets like audio clips. We recommend using a high-quality SD card. Your SD card is the core of your system, and if it fails you risk losing everything that's not backed up - make sure your SD card is reliable!

Flash an SD card using Balena Etcher
1. Download Balena Etcher
Balena Etcher is free, open-source software used for copying image files (.img) and zipped files (.zip) to storage media.
2. Download the latest Bela software image
You need to download the Bela software image to flash to an SD card. Note that this is different from the ZIP file used for updating your Bela system. Download the latest Bela image release from our Github page.
3.
Insert your SD card into your computer
Insert your SD card into your laptop using the SD card slot (you may need an SD card adapter), or use a dongle.
4. Flash the card using Balena Etcher
Open Balena Etcher. Make sure you choose the SD card in Balena Etcher! This process will overwrite the selected device, so double check the device selected. Click Select Image, and select the Bela software .img.xz file you downloaded in the previous step (you don’t need to uncompress it). Click Select Target, and select the SD card you inserted into your computer. Click Flash to start the process. A progress bar will appear.
5. Insert the SD card into your Bela system
When your SD card is finished, insert it into your Bela system. Plug your system in to your computer by USB and load up the IDE.

Flash an SD card using the command line
If you’re comfortable using the command line, you can use this method. No extra software required!
1. Download the latest Bela software image
Download the latest Bela image release from our Github page. The downloaded file will be about 550MB. Though updating your Bela system requires a ZIP file, you need to download the Bela software image to flash to an SD card.
2. Uncompress the Bela software image
When your download is complete, uncompress it. The uncompressed file will be about 4GB. Various GUI tools for uncompressing xz files are available for Linux, Mac and Windows, but you can also use the command line from a bash shell:
$ unxz -k bela_version_date.img.xz
3. Insert your SD card into your computer
You can use the SD card slot on your computer with an SD card adapter, or an external USB SD card reader.
4. Find the name of your SD card by listing your volumes
This step is crucial! The flashing process re-writes the device you choose, so make sure you double check the name and that you're sure you've specified your SD card.
On Linux, run the following command:
$ sudo fdisk -l
On Mac OS X, run:
$ diskutil list
This will list your volumes.
At least one will be your computer’s hard drive, and one will be your SD card. If you’re not sure which is your SD card, eject it and list your volumes again - the volume that’s now not listed will be your SD card. NOTE: on MacOS, if your SD card is, e.g., in /dev/disk3, you will be able to write to and read from the corresponding “raw” disk /dev/rdisk3 (note the additional r). This will make the copy faster.
5. Unmount your SD card partition
We assume for the rest of this document that your SD card is /dev/mydisk. On Linux, unmount it with:
$ sudo umount /dev/mydisk
On Mac OS X:
$ sudo diskutil unmountDisk /dev/mydisk
6. Write the image to the SD card
Run the following command (on a Mac, add an r to your disk name to increase writing speed, i.e. /dev/rmydisk):
$ sudo dd if=/path/to/inputFile.img of=/dev/mydisk bs=1024k
There is usually no command line output during this process, and it may take several minutes. To display the progress, enter CTRL + T.
7. Verify the image
When the dd command is finished, eject and re-insert the SD card into your computer. List your volumes as you did in Step 4. If your SD card has flashed correctly, it will be listed as an external drive.
8. Insert your SD card into your Bela system
Your SD card is ready! Insert it into your Bela or Bela Mini system, and you’re ready to go.

Backing up your SD card
If you simply want to download all projects and assets from your Bela system, go to Settings, scroll down to Other System Functions, and click Download all projects. Your entire project directory will automatically download in a single ZIP file. However, you might want to back up your entire Bela system as well as your project files (to use the SD card in another Bela system, for example, or as an emergency backup for a performance). This process creates a Bela software image of what’s on your Bela system, which can then be flashed to an SD card.
1. Remove the SD card from your Bela system and boot it up
For this process your Bela system must boot from its internal memory.
To do this, remove your SD card, then connect your board to your computer and boot it up.
ssh into your Bela system
Open a terminal on your computer, and connect to your Bela system by ssh by running:
$ ssh root@bela.local
Now you’re connected to your Bela system: commands run here will execute on your Bela system and not on your local computer. Bela is based on Linux, so all Bela commands are based on the Linux command line.
2. Download your Bela image
Your Bela software image file will be large - the Bela software is 4GB, plus any space required for your projects and assets. Run the following on your Bela command line:
$ sudo dd conv=sparse if=/dev/mydisk of=outputFile.img
3. Flash your image to an SD card
Now that you have a .img file that contains your entire Bela system and projects, you can flash it to an SD card using the instructions on this page.

Flashing the eMMC
This section applies only to Bela, and not to Bela Mini. Bela Mini requires an SD card and does not have internal memory. Bela uses a BeagleBone Black, which has a built-in eMMC. This means you can copy the Bela software to this eMMC using a flashed SD card. You then won’t need the SD card to boot up. To flash the Bela eMMC, follow these steps:
1. Flash the Bela software to an SD card
2. Insert the flashed SD card into your Bela and boot it up
Insert the flashed SD card, and boot up the Bela system. It will automatically boot from the SD card.
3. Access the browser-based IDE
In a web browser (we recommend Chrome) access the Bela IDE (you can use bela.local, or use the IP for your OS).
4. Find the version of the Bela software you’re running
In the console at the bottom of the IDE, type this command and press Enter:
grep "Bela image" /etc/motd
The version of the Bela software on your SD card will be printed to the console.
5.
Copy the image from the SD card to the eMMC
If you are on v0.2.1 or earlier, run this line in the console:
If you are on v0.3.0 or later, run:
The copy process will take a few minutes.
6. Verify it
This tests if your image has been properly copied to the eMMC. Shut down your Bela and remove the SD card. Then plug it back into your computer, and wait for it to boot up. Load up the IDE in a web browser - if it loads, you’ve successfully flashed your Bela system’s internal memory.
In Agile development, team members work together to test stories and ensure high quality. However, when the stories from different teams are assembled, there is often a lack of clarity around who is responsible for testing how well they integrate. Processes around integration testing can be a point of confusion for Agile teams. Continuous integration (CI), which is the process of running regression tests with each build, can help, but will not solve all your integration test needs. Agile expert Janet Gregory discusses the challenges with integration testing and explains the practice of continuous integration. The challenge with integration testing At the Agile 2013 conference in August, Janet Gregory, co-author with Lisa Crispin of the popular book Agile Testing: A Practical Guide for Agile Testers and Teams, hosted an interactive session that gave participants an opportunity to discuss challenges they were encountering. One such challenge had to do with integration testing when multiple teams are developing code. Though there is a "Scrum of Scrums" meeting to discuss dependencies, there are no stories or tasks assigned, and so it's unclear who is responsible for the integration testing that must happen to ensure the code deployed by different teams is working well together. Gregory answers this way: This is a common problem when there is more than one team. Lisa [Crispin] and I call this issue 'forgetting the big picture.' I wrote a blog post on this a while ago. There are a couple of things that I suggest to try: 1. Write acceptance tests at the feature level (i.e., a feature is some business capability that makes sense to release; a feature has many stories). Ideally, this feature is contained within one team, but not always. If it is across teams, then either one or both teams take responsibility for making sure the feature is 'done.' I also recommend defining 'feature done,' as well as 'story done.' 2.
Sometimes it is necessary to have an integration team -- not my favorite, but I've seen it successfully work. For example, at the end of every iteration or sprint, the project teams release their integrated, potentially shippable product to the integration team. This group usually has a more substantial test environment and can test the post-development activities like browser compatibility or interoperability or load, performance, etc. 3. To make either of the first two items work, someone has to be part of the 'Scrum of Scrums' or full product-release team to be aware of the dependencies. What about continuous integration? Martin Fowler, one of the Agile Manifesto signatories, describes the core practice of CI as follows: Continuous integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily -- leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Sometimes teams will feel that if they have implemented a CI environment, "integration testing" is essentially covered. There is no need to worry about it anymore. While certainly CI provides a means for regression testing with every build, it does not necessarily test all integration points or provide a thorough integration test, unless automated tests have been written with a thorough integration test in mind. In a series of articles by Howard Deiner, CI is explained, step by step. In the first of these tips, "Continuous integration: Quality from the start with automated regression," Deiner explains that CI is a way to integrate automated regression tests into your build. These are typically automated unit tests that ensure that the development code is still operating as intended. 
If suddenly one of these tests fails, it might be because a different team implemented some dependent code that violated an agreed-upon requirement. In this way, CI does provide some level of testing and validation of code that is written by different teams.

Certainly CI is a positive step toward finding integration issues early. However, it is only as good as the automated tests that are being executed. Just because automated regression tests are being executed, we cannot be assured of the quality of those tests or whether they will catch all integration errors.

By combining Gregory's advice to create feature-level integration tests with the guidance from Fowler and Deiner, and making those tests automated and part of a CI build, we can have the best of both worlds. CI provides a way to continually test the code. Teams need to ensure that those automated tests are of high quality and will catch errors that may occur beyond isolated unit tests.
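To make the idea of a feature-level test in a CI build concrete, here is a minimal sketch in Python. The two functions stand in for components owned by two hypothetical teams; the names and the "contract" between them are invented for illustration, not taken from the article:

```python
# Hypothetical components owned by two different teams.

def price_order(items):
    """Team A's component: compute an order total in cents."""
    return sum(cents for _, cents in items)

def format_invoice(total_cents):
    """Team B's component: render a total given in cents."""
    return f"Total: ${total_cents / 100:.2f}"

def test_order_total_renders_correctly():
    # A feature-level test exercises the agreed contract between the
    # teams (cents in, formatted dollars out).  Run on every commit by
    # the CI build, a unilateral change on either side fails the build
    # instead of surfacing as an integration bug at release time.
    total = price_order([("widget", 1250), ("gadget", 499)])
    assert format_invoice(total) == "Total: $17.49"

test_order_total_renders_correctly()
```

A test like this sits above the unit level but below full end-to-end testing, which is roughly the "feature done" boundary Gregory describes.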
Joey's refrigerator breaks down, so he tries to manipulate his friends into paying for part of it. Plus he has to eat all those perishables by himself. Rachel is looking for a date to a charity ball; Phoebe finds a guy for her, but so do Monica and Chandler; things get competitive. Ross and Elizabeth continue their "secret" romance. When Elizabeth announces she's going away for Spring Break, Ross misunderstands and thinks she's asking him to go with her. When he finds out she's going away without him, he starts to worry about how much "partying" she might be planning.

Monica: Well, uh, you know, our guy works with Chandler and he's really nice, and smart, and he's a great dresser!
Phoebe: Have you seen your guy's body?
Chandler: No, our guy is just a floating head.
Phoebe: My guy is well read.
Chandler: Our guy has great hair.
Phoebe: My guy has great teeth!
Chandler: Our guy smells incredible.
Monica (to Chandler): Do you want our guy to be your guy?

Ross: You don't understand! Elizabeth was about to ask me to go on a trip with her! Is that taking it slow? No, I'm not ready for this! Okay? What... what do I tell her?
Chandler: Just tell her the truth! Tell her you're not ready.
Ross: I can do that. Oh, oh... what if she gets upset?
Chandler: Then you distract her with a Barbie doll.

Chandler: Are you funny? Tell us a joke!
Sebastian: Look, I just wanted to have coffee with Rachel.
Phoebe: Well, so do a lot of people.
Sebastian: Actually, I, uh... I gotta get going. Give me a call sometime.
Rachel: Oh, but, you know, you didn't give me your phone number.
Sebastian: Okay! See you later!
Chandler: Turns out he is kinda funny.

Ross: Then what am I supposed to do?
Phoebe: Nothing, you just have to be cool with it.
Ross: Well, what if she goes down and... and sleeps with a bunch of guys?
Chandler: Well, maybe you don't marry this one.

Ross: Anyway, um, I... I am worried about that bathing suit, not because it's revealing (which I'm fine with), no, I'm concerned about your health. Sun exposure.
Elizabeth: Oh, don't worry. I have plenty of sun block. It's SPF 30.
Ross: Well, if what's in the bottle is actually 30. I mean, sometimes you get 30, sometimes you get 4, and I swear to God, more often than not, it's just milk.
Why linear mixed-effects models are probably not the solution to your missing data problems

Linear mixed-effects models are often used for their ability to handle missing data using maximum likelihood estimation. In this post I will present a simple example of when the LMM fails, and illustrate two MNAR sensitivity analyses: the pattern-mixture method and the joint model (shared parameter model). This post is based on a small example from my PhD thesis.

D. B. Rubin (1976) presented three types of missing data mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). LMMs provide unbiased estimates under MAR missingness. If we have the complete outcome variable \(Y = (Y_{\text{obs}}, Y_{\text{mis}})\), which is made up of the observed data \(Y_{\text{obs}}\) and the missing values \(Y_{\text{mis}}\), and a missing data indicator \(R\) (D. B. Rubin 1976; R. J. Little and Rubin 2014; Schafer and Graham 2002), then we can write the MCAR and MAR mechanisms as

\[
\text{MCAR: } \Pr(R \mid Y) = \Pr(R), \qquad
\text{MAR: } \Pr(R \mid Y) = \Pr(R \mid Y_{\text{obs}}).
\]

If the missingness depends on \(Y_{\text{mis}}\), the missing values in \(Y\), then the mechanism is MNAR. MCAR and MAR are called ignorable because the precise model describing the missing data process is not needed. In theory, valid inference under MNAR missingness requires specifying a joint distribution for both the data and the missingness mechanisms (R. J. A. Little 1995). There is no way to test whether the missing data are MAR or MNAR (Molenberghs et al. 2008; Rhoads 2012), and it is therefore recommended to perform sensitivity analyses using different MNAR mechanisms (Schafer and Graham 2002; R. J. A. Little 1995; Hedeker and Gibbons 1997).

LMMs are frequently used by researchers to try to deal with missing data problems. However, researchers frequently misunderstand the MAR assumption and often fail to build a model that would make the assumption more plausible.
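To make the MCAR/MAR distinction concrete, here is a small illustrative sketch (mine, not from the post) that generates both kinds of missingness for a two-occasion outcome. Under MCAR the complete-case mean of the follow-up is unbiased; under MAR, where missingness depends on the observed baseline, the complete-case mean is biased, which is exactly why a model must condition on the observed information:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
y1 = rng.normal(0, 1, n)              # baseline, always observed
y2 = 0.5 * y1 + rng.normal(0, 1, n)   # follow-up, subject to missingness

# MCAR: missingness ignores the data entirely
mcar_mask = rng.random(n) < 0.3

# MAR: missingness depends only on the *observed* baseline y1
mar_prob = 1 / (1 + np.exp(-(y1 - 1)))
mar_mask = rng.random(n) < mar_prob

# Under MCAR the observed follow-up mean is close to the true mean;
# under MAR the complete-case mean of y2 is biased downward, because
# high-baseline participants are more likely to be missing.
print(y2.mean(), y2[~mcar_mask].mean(), y2[~mar_mask].mean())
```

Maximum likelihood in an LMM that includes \(y_1\) corrects the MAR bias; the point of the post is that it cannot correct MNAR mechanisms like the random-slope-dependent dropout below.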
Sometimes you even see researchers using tests, e.g., Little's MCAR test, to "prove" that the missing data mechanism is MCAR or MAR and hence ignorable, which is clearly a misunderstanding built on faulty logic. A common problem is that researchers do not include covariates that potentially predict dropout. Thus, it is assumed that missingness depends only on the previously observed values of the outcome, which is quite a strong assumption.

A related misunderstanding is that the LMM's missing data assumption is more liberal because it allows participants' slopes to vary. It is sometimes assumed that if a random slope is included in the model, it can also be used to satisfy the MAR assumption. Clearly, it would be very practical if the inclusion of random slopes allowed missingness to depend on patients' latent change over time, because it is probably true that some participants' dropout is related to their symptoms' rate of change over time. Unfortunately, the random effects are latent variables and not observed variables; hence, such a missingness mechanism would also be MNAR (R. J. A. Little 1995). The figure below illustrates the MAR, outcome-based MNAR, and random coefficient-based MNAR mechanisms.

To illustrate these concepts, let's generate data from a two-level LMM with random intercepts and slopes, and include an MNAR missing data mechanism where the likelihood of dropping out depends on the patient-specific random slopes. Moreover, let's assume that the missingness differs between the treatment and control group. This isn't that unlikely in unblinded studies (e.g., wait-list controls). The dropout process takes the form of a group-specific logistic model in which the log-odds of dropping out depend on the patient's random slope \(u_{1i}\),

\[
\operatorname{logit}\!\left[\Pr(D_i = 1)\right] = \gamma_0^{(g)} + \gamma_1^{(g)} u_{1i},
\]

with coefficients \(\gamma_0^{(g)}, \gamma_1^{(g)}\) that differ between the groups \(g\). The R code is quite simple. Now let's draw a large sample from this model (1000 participants per group), and fit a typical longitudinal LMM using both the complete outcome variable and the incomplete (MNAR) outcome variable. Here are the results (click on "SHOW" to see the output).
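The post's R simulation code is not reproduced in this extract. As a rough stand-in, here is a comparable sketch in Python/NumPy; the coefficient values, variances, and dropout hazards are my own guesses for illustration, not the ones used in the post:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mnar(n_per_group=1000, n_time=11,
                  gamma0=(-1.0, -3.0), gamma1=(2.0, 0.0)):
    """Two-level LMM with correlated random intercepts/slopes, plus a
    group-specific dropout hazard driven by the *latent* random slope
    u1 -- an MNAR mechanism.  All numbers here are illustrative."""
    n = 2 * n_per_group
    group = np.repeat([0, 1], n_per_group)   # 0 = control, 1 = treatment
    time = np.arange(n_time)

    # Random intercepts (sd 1) and slopes (sd 0.5), correlation -0.5
    cov = np.array([[1.0, -0.25],
                    [-0.25, 0.25]])
    u = rng.multivariate_normal([0.0, 0.0], cov, size=n)

    # Fixed effects: small extra slope for the treatment group
    slope = -1.0 - 0.25 * group + u[:, 1]
    y = 10 + u[:, 0, None] + slope[:, None] * time \
        + rng.normal(0, 1, (n, n_time))

    # Per-occasion dropout probability depends on the latent slope u1,
    # with different coefficients in each group
    g0 = np.where(group == 0, gamma0[0], gamma0[1])
    g1 = np.where(group == 0, gamma1[0], gamma1[1])
    p_drop = 1 / (1 + np.exp(-(g0 + g1 * u[:, 1])))
    for i in range(n):
        t_drop = rng.geometric(max(p_drop[i], 1e-6))
        if t_drop < n_time:
            y[i, t_drop:] = np.nan   # everything after dropout is missing
    return y, group

y, group = simulate_mnar()
```

Fitting an ordinary LMM to `y` with the NaNs dropped would reproduce the qualitative result discussed next: the MAR-based estimate of the slope difference is biased.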
We can see that the slope difference is -0.25 for the complete data and much larger for the LMM with missing data (-1.14).

A simple extension of the classical LMM is a pattern-mixture model. This is a simple model where we allow the slope to differ within subgroups with different dropout patterns. The simplest approach is to group the participants into two subgroups, dropouts (1) or completers (0), and include this dummy variable in the model. As you can see in the output, we now have a bunch of new coefficients. In order to get the marginal treatment effect we need to average over the dropout patterns. There are several ways to do this; we could just calculate a weighted average manually. For example, the outcome at posttest in the control group is the average of the pattern-specific posttest means, weighted by the proportion of participants in each dropout pattern. To estimate the treatment effect we'd need to repeat this for the treatment group and take the difference. However, we'd also need to calculate the standard errors (e.g., using the delta method). An easier option is to just specify the linear contrast we are interested in. This tells us that the difference between the groups at posttest is estimated to be -4.65. This is considerably smaller than the estimate from the classical LMM, but still larger than for the complete data. We could accomplish the same thing using

The pattern-mixture model was an improvement, but it didn't completely recover the treatment effect under the random slope MNAR model. We can actually fit a model that allows dropout to be related to the participants' random slopes. To accomplish this we combine a survival model for the dropout process with an LMM for the longitudinal outcome. We can see from the output that the estimate of the treatment effect is really close to the estimate from the complete data (-0.23 vs -0.25). There's only one small problem with the joint model, and that is that we almost never know what the correct model is...

Now let's run a small simulation to show the consequences of this random-slope-dependent MNAR scenario.
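As an aside, the manual pattern-averaging step described above can be sketched in a few lines. The pattern-specific posttest means and dropout proportions below are made-up placeholders, not estimates from the post:

```python
# Marginal (pattern-averaged) posttest mean from a pattern-mixture fit.
# All numbers are hypothetical placeholders for illustration.

def marginal_mean(mu_completer, mu_dropout, p_dropout):
    """Average the pattern-specific posttest means, weighted by the
    observed proportion of each dropout pattern."""
    return p_dropout * mu_dropout + (1 - p_dropout) * mu_completer

# Control group: completer and dropout pattern means, dropout proportion
mu_control = marginal_mean(mu_completer=4.0, mu_dropout=8.0, p_dropout=0.4)

# Treatment group, then the marginal treatment effect at posttest
mu_treat = marginal_mean(mu_completer=1.0, mu_dropout=5.0, p_dropout=0.6)
effect = mu_treat - mu_control
```

In practice one would let the software compute this as a linear contrast of the fitted coefficients, so that the standard error (via the delta method) comes along for free.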
We'll do a study with 11 time points, 150 participants per group, a variance ratio of 0.02, and pretest ICC = 0.6, with a correlation between intercepts and slopes of -0.5. There will be a "small" effect in favor of the treatment of d = -0.2. The following models will be compared:

- LMM (MAR): a classical LMM assuming that the dropout was MAR.
- GEE: a generalized estimating equation model.
- LMM (PM): an LMM using a pattern-mixture approach. Two patterns were used, either "dropout" or "completer", and the results were averaged over the two patterns.
- JM: a joint model that correctly allowed the dropout to be related to the random slopes.
- LMM with complete data: an LMM fit to the complete data without any missingness.

I will not post all the code here; the complete code for this post can be found on GitHub. Here's a snippet showing the code that was used to fit the models. The table and figure below show how much the treatment effects differ. We can see that LMMs are badly biased under this missing data scenario; the treatment effect is much larger than it should be (Cohen's d: -0.7 vs. -0.2). The pattern-mixture approach improves the situation, and the joint model recovers the true effect. Since the sample size is large, the bias under the MAR assumption leads to the LMM's CIs having extremely bad coverage. Moreover, under the assumption of no treatment effect, the MAR LMM's type I errors are very high (83%), whereas the pattern-mixture and joint model are closer to the nominal levels.

| Model | M(Est.) | Rel. bias | d | Power | CI coverage | Type I error |

Note: MAR = missing at random; LMM = linear mixed-effects model; GEE = generalized estimating equation; JM = joint model; PM = pattern mixture; Est. = mean of the estimated effects; Rel. bias = relative bias of Est.; d = mean of the Cohen's d estimates.

This example is purposely quite extreme. However, even if the MNAR mechanism were weaker, the LMM would yield biased estimates of the treatment effect.
The assumption that dropout might be related to patients’ unobserved slopes is not unreasonable. However, fitting a joint model is often not feasible as we do not know the true missingness mechanism. I included it just to illustrate what is required to avoid bias under a plausible MNAR mechanism. In reality, the patients’ likelihood of dropping out is likely an inseparable mix of various degrees of MCAR, MAR, and MNAR mechanisms. The only sure way of avoiding bias would be to try to acquire data from all participants—and when that fails, perform sensitivity analyses using reasonable assumptions of the missingness mechanisms. Hedeker, Donald, and Robert D Gibbons. 1997. “Application of Random-Effects Pattern-Mixture Models for Missing Data in Longitudinal Studies.” Psychological Methods 2 (1): 64–78. doi:10.1037/1082-989X.2.1.64. Little, Roderick J. A. 1995. “Modeling the Drop-Out Mechanism in Repeated-Measures Studies.” Journal of the American Statistical Association 90 (431): 1112–21. doi:10.1080/01621459.1995.10476615. Little, Roderick JA, and Donald B Rubin. 2014. Statistical Analysis with Missing Data. Vol. 333. John Wiley & Sons. Molenberghs, Geert, Caroline Beunckens, Cristina Sotto, and Michael G. Kenward. 2008. “Every Missingness Not at Random Model Has a Missingness at Random Counterpart with Equal Fit.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70 (2): 371–88. doi:10.1111/j.1467-9868.2007.00640.x. Rhoads, Christopher H. 2012. “Problems with Tests of the Missingness Mechanism in Quantitative Policy Studies.” Statistics, Politics, and Policy 3 (1). doi:10.1515/2151-7509.1012. Rubin, Donald B. 1976. “Inference and Missing Data.” Biometrika 63 (3): 581–92. doi:10.1093/biomet/63.3.581. Schafer, Joseph L., and John W. Graham. 2002. “Missing Data: Our View of the State of the Art.” Psychological Methods 7 (2): 147–77. doi:10.1037//1082-989X.7.2.147. Published July 09, 2020 (View on GitHub)
java build path mystery

My package builds, as usual, upon several external packages. I want to modify one of the externals, so I go grab its open source. It is in turn built upon further externals, so I get jars for those until all but one dependency is fulfilled: org.codehaus.jackson.JsonParser, called out of the jackson-mapper jar. I take a guess that JsonParser is in jackson-core.jar (how do you know?) so I add it. My dependency is resolved, and different dependencies suddenly appear for 6 of the source files that previously looked complete. One step forward, 6 steps back. (All this in Eclipse.) What am I missing? Maybe not all dependencies are found in one pass? How do you find and resolve dependencies? Thanks!

You need to outline how you are doing dependencies. From your description it sounds like you are manually working out what is needed, downloading the jars and installing them into your project. This is perhaps the most complicated, slowest and most painful way of doing things. I would suggest that you look into using the Ivy dependency manager (usually used with the Ant build tool), or the Maven build tool, which has an inbuilt dependency manager. A further and more advanced tool (IMHO) is Gradle, which uses Ivy behind the scenes and can easily be told to use both Ivy and Maven repositories to source jars from. The advantage of using these tools is that they take care of the dirty work of figuring out the dependencies and downloading the files. They are not a complete solution and you will still have to work out version conflicts and other issues, but they take most of the pain out.

Yes, I'm doing it manually, ack. I'll look into the tools, tyvm!

Oh, and there are plugins for the Eclipse and IntelliJ IDEs for all of these tools which make life a lot easier there too.

Much simpler question: I am building this external package to take advantage of just a very few small changes.
Is it possible for me to override what is in a jar file with my own compile of a class file or two I build from the source, just for those files? I suppose I could update the jar. Yeesh, noobs :)

Yes, it's possible. You would need to create your own jar and ensure that it is on the classpath before the original. However, this is not a recommended practice because it's too easy to break. A better solution is to create your own version of the complete API with your changes and use that instead. Alternatively, look into whether the API supports you extending its classes. If so, you can do that in your application and just use the original API.
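With one of the dependency managers suggested above, the jar guesswork disappears: you declare only the artifact your code compiles against, and the tool resolves its transitive dependencies (such as jackson-core for jackson-mapper) for you. A hypothetical build.gradle sketch; the exact version number is illustrative and should be replaced with whatever your project needs:

```groovy
// build.gradle -- version chosen for illustration only
repositories {
    mavenCentral()
}

dependencies {
    // Declaring jackson-mapper-asl pulls in jackson-core-asl transitively,
    // so there is no need to guess which jar contains
    // org.codehaus.jackson.JsonParser.
    implementation 'org.codehaus.jackson:jackson-mapper-asl:1.9.13'
}
```

Maven's pom.xml achieves the same thing with a `<dependency>` element, and the Eclipse plugins mentioned above wire the resolved jars straight onto the project's build path.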
First off, I'm a new-born with AWS (started looking into it two days ago). My client needs a new Drupal 6 module; I have it done, and all I need is to upload it and set some things up. My client gave me a username and password for Amazon, so I figured they were using AWS. I can see the running instance, and I've followed Amazon documentation to add a new key pair and also add a custom IP rule for SSH access. Problem is, when I try to connect via ssh with a very simple and basic command

ssh -i taskey.pem ec2-user@ec-x-x-x-x...amazonaws.com

the response is Permission denied (publickey).

Status of my environment:

- Existing SSH rule for my IP address on the security group associated with the running instance
- New key pair added to the running instance
- key.pem file has 0600 permissions
- I know it's a CentOS machine because part of the response when I ping the site's IP says it is, hence why I use username ec2-user
- Just in case, I've also tried ubuntu and root

Reading around some, it seems that you can't just magically add new key pairs to running instances. There is an existing public key for my running instance, but it was created in the past by another worker, and I can't contact them. My client has no repository; hence, as you can imagine, I'm not just trying loads of things. If I break it, everything gets lost. This answer suggests deleting the old key pair (the one I have no .pem file for), but I don't know what the consequences of that might be. Sorry for such noobness but I'm in a rush and have no room to try things. Thanks in advance.

I've chosen the "create an AMI..." answer, simply because it's the one I went for. I liked the fact that the old machine could be kept (shut down), and if anything went wrong all I had to do was turn it on again. I up-voted the other possible answer in regards to mounting and unmounting the hard drive, because it's another way of doing it and, in some cases, the only way.
Steps followed to achieve SSH access successfully:

- Stop the running instance.
- Create an AMI from it (right click and choose Create Image).
- Once that was created, launch it with the same specifics as the original instance.
- Supply it with the new key pair.
- Repoint the assigned Elastic IP (that's the only service I had, luckily very simple). Go to Elastic IPs and find the existing one (which no longer has anything assigned to it since the original instance was shut down). Right click it, choose Associate Address, and pick the new running instance (the one launched from the created AMI) in the Associate with list.
- Check SSH access to it.
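The same recovery sequence can be scripted. Here is a hedged sketch using boto3's EC2 client; the function names are real boto3 calls, but the identifiers and the exact parameters you need (waiting for the image to become available, copying instance type and security groups, etc.) are simplified assumptions, since the original answer used the AWS console:

```python
# Sketch of the console steps above as boto3 calls (simplified: no
# waiters, and instance type / security groups are not copied over).

def recover_access(ec2, instance_id, allocation_id, key_name, image_name):
    """Stop the locked-out instance, image it, relaunch with a new key
    pair, and repoint the Elastic IP at the replacement instance."""
    ec2.stop_instances(InstanceIds=[instance_id])
    image = ec2.create_image(InstanceId=instance_id, Name=image_name)
    run = ec2.run_instances(ImageId=image["ImageId"], KeyName=key_name,
                            MinCount=1, MaxCount=1)
    new_id = run["Instances"][0]["InstanceId"]
    ec2.associate_address(InstanceId=new_id, AllocationId=allocation_id)
    return new_id
```

Passing the client in as an argument keeps the original instance untouched until you confirm SSH works against the replacement, mirroring the "keep the old machine shut down as a fallback" reasoning above.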
package org.sdase.commons.server.opentracing.tags;

import static javax.ws.rs.core.HttpHeaders.AUTHORIZATION;
import static javax.ws.rs.core.HttpHeaders.COOKIE;
import static javax.ws.rs.core.HttpHeaders.SET_COOKIE;
import static org.apache.commons.lang3.StringUtils.join;

import io.opentracing.tag.StringTag;
import java.util.stream.Collectors;
import javax.ws.rs.core.MultivaluedHashMap;
import javax.ws.rs.core.MultivaluedMap;

public class TagUtils {

  private TagUtils() {
    // prevent instances
  }

  /**
   * Please use {@link TagUtils#convertHeadersToString(javax.ws.rs.core.MultivaluedMap)} to generate
   * values for this tag.
   */
  public static final StringTag HTTP_REQUEST_HEADERS = new StringTag("http.request_headers");

  /**
   * Please use {@link TagUtils#convertHeadersToString(javax.ws.rs.core.MultivaluedMap)} to generate
   * values for this tag.
   */
  public static final StringTag HTTP_RESPONSE_HEADERS = new StringTag("http.response_headers");

  /**
   * Convert a given {@link MultivaluedMap} with {@link String} keys to the format [key0 = 'value0',
   * 'value1']; [key1 = 'value2']; ...
   *
   * @param headers The {@link MultivaluedMap} with {@link String} keys
   * @return Formatted {@link String} of header keys and values or {@code null}, if {@code null} was
   *     passed as parameter.
   */
  public static String convertHeadersToString(MultivaluedMap<String, ?> headers) {
    if (headers != null) {
      MultivaluedMap<String, ?> sanitizedHeaders = sanitizeHeaders(headers);
      return sanitizedHeaders.entrySet().stream()
          .map(
              entry ->
                  join(
                      "[",
                      entry.getKey(),
                      " = '",
                      entry.getValue().stream()
                          .map(Object::toString)
                          .collect(Collectors.joining("', '")),
                      "']"))
          .collect(Collectors.joining("; "));
    }
    return null;
  }

  public static <T> MultivaluedMap<String, T> sanitizeHeaders(MultivaluedMap<String, T> headers) {
    MultivaluedMap<String, T> sanitizedHeaders = new MultivaluedHashMap<>();
    headers.forEach(
        (key, value) -> {
          if (AUTHORIZATION.equalsIgnoreCase(key)) {
            sanitizedHeaders.put(
                key,
                value.stream()
                    .map(h -> (T) sanitizeAuthorizationHeader(h.toString()))
                    .collect(Collectors.toList()));
          } else if (SET_COOKIE.equalsIgnoreCase(key) || COOKIE.equalsIgnoreCase(key)) {
            sanitizedHeaders.putSingle(key, (T) "…");
          } else {
            sanitizedHeaders.put(key, value);
          }
        });
    return sanitizedHeaders;
  }

  private static String sanitizeAuthorizationHeader(String header) {
    if (header.startsWith("Bearer ")) {
      return "Bearer …";
    } else {
      return "…";
    }
  }
}
Windows RT tablets can access virtual desktops without additional licenses -- but there's a catch … or three. In presentations, I've asked hundreds of people whether they'll deploy Windows 8 on corporate desktops, and less than five have raised their hand. However, another show of hands shows that more people are interested in the mobility aspect of Windows 8, in particular the Surface tablets. It seems that if Microsoft isn't getting Windows 8 into corporations one way, it will in another. The Surface Pro is Microsoft's least notable new tablet; it runs Windows 8, plain and simple. Its main advantage over its sister product, the Surface RT tablet, is that it can run real Windows applications because it's built on Intel hardware. The Surface RT, on the other hand, is an ARM-based device, running a derivative OS called Windows RT that looks like Windows 8 but comes without the flexibility you would expect of Windows. Since it's an ARM device, it can't run traditional applications or even be managed in a traditional way -- it can't even join a domain! What the Surface RT tablet can do, though, is run Windows 8 apps from the Windows Store (only ones that run in the Metro interface) and connect to virtual desktops. Plus, it can connect to VDI without needing to buy an additional license. Microsoft has essentially bundled its new Companion Device License (CDL) with the Windows RT OS that runs on the tablet. That represents a savings of up to $99 per year when compared to using an iPad or Android tablet to access a virtual desktop, which requires Microsoft's Virtual Desktop Access license or CDL. What's the catch? There are, however, a few caveats to licensing Windows RT tablets. First, this "freebie" only applies if a user's primary computer has Software Assurance (SA). This isn't really all that different from anything else Microsoft does. 
You need SA to connect to a virtual desktop anyway, and if the machine you're using doesn't have SA, you have to purchase a VDA license. When you do that, you're entitled to purchase a CDL that allows you to use up to four additional devices to connect to a virtual desktop. I'm not saying it's the right thing to do, I'm just saying it's not new. Note that this licensing only applies to VDI-connected desktops. Remote Desktop Services (RDS) licensing requirements have not changed, although the price has gone up for per-user licenses as of late last year. If you're accessing RDS remote desktops, you still need to purchase a device or user-based RDS CAL, but you don't need SA or VDA licenses. If your users have multiple devices, it makes sense to purchase the user-based CAL, despite the price increase. The other issue with Windows RT tablets is one I just learned about: The "free" license to access VDI desktops only applies if the Surface RT is purchased and owned by the company. I don't expect too many people got Surface RT tablets for Christmas, let alone brought them to work and asked IT to connect them to their virtual desktop. But if they did, they would be in violation of the license agreement. This caveat is not to be overlooked if you're considering embracing the tablet. When you boil it down, you can indeed find a situation where Surface RT tablets (and other devices running Windows RT) can be used to access virtual desktops without additional licensing. You just have to be accessing a VDI desktop (not RDS) from a device your company purchased for you -- when you're not using the desktop that the company bought for you with SA on it. That sure takes the shine off the "free access" feature, doesn't it?
Wed, 21 Jan 2009, 10:34:05 EST Created on: Sat, 27 Dec 2008, 20:15:39 EST

EGM 6322 Principles of Engineering Analysis 2, Dr. L. Vu-Quoc

Partial differential equations (elliptic, parabolic, hyperbolic), exact solution methods, approximate analytical solution methods. The objective of this course is to develop analytical methods to solve partial differential equations with applications in many areas of engineering (solids, fluids, electromagnetics, heat).

Text and other resources:

Overview of analytical methods of solving PDEs, caveats for PDEs. Nonlinear, quasilinear, linear PDEs. Classification of second-order PDEs.

Transformation methods: reduction to first-order systems, changing variables, curvilinear coordinates, Euler transformation, Kirchhoff transformation, von Mises transformation, Prandtl transformation, hodograph transformation, Legendre transformation.

Separation of variables, free-boundary problems.

Exact methods for both ODEs and PDEs: method of undetermined coefficients, eigenfunction expansions (with advanced application to singularity computation), method of images, integral transforms (for finite and infinite intervals).

Exact methods for PDEs: conformal mappings for the 2-D Laplace equation, Poisson formula to solve the Laplace equation on a circle (with application in conformal mappings), Duhamel's principle for linear parabolic and hyperbolic PDEs, method of characteristics for hyperbolic PDEs, exact solutions for the wave equation, method of descent for hyperbolic PDEs, hodograph transformation (application in fluid mechanics), Legendre transformation (also application in dynamics and thermodynamics).

Approximate analytical methods: multiple scales, application to singularity computation (fracture mechanics, fluid mechanics, electromagnetics), advanced application of eigenfunction expansions.

Tentatively, homework/projects and class participation including bonuses (50%), Exam 1 (25%), Exam 2 (25%).
Adjustments to this grade determination and to the weights could be made during the course in consultation with the students.

Handbook of Differential Equations, Third Edition, Academic Press, 1998. QA371.Z88 1989, 2 copies, one for in-library use.
L. Lapidus and G.F. Pinder, Numerical Solution of Partial Differential Equations in Science and Engineering, Q172 .L36 1982
W.E. Boyce and R.C. DiPrima, Elementary Differential Equations and Boundary Value Problems, QA371 .B773 2001
Partial Differential Equations in Mechanics 1: Fundamentals, Laplace's Equation, Diffusion Equation, Wave Equation
Partial Differential Equations in Mechanics 2: The Biharmonic Equation, Poisson's Equation (Vol 2) (Soft Cover), QA805 .S45 2000

Homework / Projects: There will be HW assignments, which are to be solved following Cooperative Learning Techniques. HW should be thought of as mini projects, which include ``hand solution'' with the help of Matlab and the use of the accompanying Matlab codes. Students will also be asked to develop their own Matlab codes. For a tutorial on how to use Matlab, see for more details.

Return to main course web page
Add serialization to liquid tracking

As a user I would like to be able to track the sample #s I placed in my starting deck state.

Background

The serialization number is currently only visible when you're placing liquid into labware in the starting deck state. It does not display in any liquid tracking tooltips.

Acceptance criteria

[ ] In liquid tracking tooltips, liquids with a serialization # should display that #.

AFAIK we still haven't answered these questions:

- How are serialization numbers assigned within a labware, and across labware?
- How can users change the serialization order within a labware and across labware?
- What happens when you check "serialize" on a liquid that is already on the deck?
- If you duplicate a labware that contains serialized liquids, do the serialize numbers increment or duplicate?
- Can you have multiple instances of a liquid with the same serialization number across labware? What about within a labware?
- If you un-check "serialize", save that, then re-check it and save that, did it "reset" the serialization order?

Those questions seem pretty involved to answer, and to implement! A potential simple & expedient approach for the next iteration could be something like: PD controls serialization order. Its rule is: liquids are serialized within a labware from top to bottom then left to right, and all serialized liquids, in each labware, start at 1.

This means that you cannot (for this iteration) serialize across labware. If you have 4 plates with A1 A2 A3 filled with serialized "Sample", it will always be A1: Sample 1, A2: Sample 2, A3: Sample 3 in each labware. If you have multiple ingredients in a labware, each of them starts at 1. So column 1 might look like: Sample 1, Sample 2, Buffer 1, Buffer 2, Sample 3, Sample 4.
If we have that limitation in place, we get these answers to the questions:

- Users can't change the serialization order of initial deck setup.
- Liquids can have the same serialization number across labware, but not within a labware (e.g. there can be a "Sample 1" in Trough and in 96-Flat, but only 1 "Sample 1" in the Trough; you can't fill multiple wells with "Sample 1").
- If you duplicate a labware, it'd be the same end result as if you make a new labware and fill the same wells with the same liquid.
- If you check "Serialize" for an un-serialized liquid already on the deck, PD assigns the serialization numbers following the "top to bottom then left to right, start at 1" rule.
- Similarly, un-checking and re-checking the "Serialize" checkbox will always end up with your liquid setup in the same state it was.

These assignments I'm talking about only happen in initial well setup. Once you move the liquids around, their serialization number follows them - so if you transfer source plate to empty dest plate, Sample 1 from A1 to dest C1 and Sample 2 from A2 to dest B1, the dest plate will have B1: Sample 2, C1: Sample 1.

I missed: what happens when you combine liquids in these cases:

- Sample 1 + Sample 1
- Sample 1 + Sample 2
- Sample 1 + DNA 2
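The "top to bottom, then left to right, start at 1" rule proposed above is easy to pin down in code. A sketch (well-name parsing and the data layout are my own assumptions for illustration):

```python
# Sketch of the proposed rule: serialize within a labware from top to
# bottom, then left to right, with each liquid's counter starting at 1.

def assign_serial_numbers(wells):
    """wells: dict mapping well name (e.g. 'A1') to liquid name.
    Returns a dict mapping well name to (liquid, serial number)."""
    def column_major(well):
        # 'B3' -> (3, 'B'): sort by column first, then row, so wells are
        # visited top to bottom within a column, then left to right.
        return (int(well[1:]), well[0])

    counters = {}
    result = {}
    for well in sorted(wells, key=column_major):
        liquid = wells[well]
        counters[liquid] = counters.get(liquid, 0) + 1  # per-liquid, from 1
        result[well] = (liquid, counters[liquid])
    return result

plate = {"A1": "Sample", "B1": "Sample", "C1": "Buffer",
         "A2": "Sample", "D1": "Buffer"}
numbered = assign_serial_numbers(plate)
```

Because the assignment is a pure function of the filled wells, un-checking and re-checking "Serialize" trivially reproduces the same numbering, which is one of the answers listed above.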
That was the most effective assignment help I could ever get. Big thanks to MyAssignmenthelp.com for supplying me the top assistance and the best programming help for my assignment on DBMS. Getting ideal programming assignments is difficult from online resources, which can turn out to be unreliable or fail to offer the highest confidentiality. We guarantee safe methods for conducting your personal business and receiving the most qualified support with your programming homework. Our inexpensive online programming help service is the reason we have become famous among students in the U.S. We are a very affordable online programming help service that never compromises on quality. We understand that students often hesitate to choose affordable programming help companies from the USA because they worry about the quality. Have similar confusions? Get in touch with Assignmenthelp.us for high-quality programming help at low prices. Learn the art of crafting your own skills in Python, along with critical concepts like scoping and error handling. Dice Rolling Simulator is a simple game where we build a dice simulator and display the numbers on the dice; the application rolls the dice and prints the number that comes up. At some point, each of us was in the same boat. Many of us had to learn Python and other programming languages and sometimes needed help. We now offer our expertise to you, so you can catch a little break from your college work. Learn how to take the Python workflows you already have and easily scale them up to large datasets without the need to... You guys gave me the best assignment. I have another assignment on web programming. I'd thoroughly enjoy coming back to you people for that.
We use simple yet relevant strategies to demonstrate and solve the problem, making sure that students seeking programming help can understand it without unnecessary complexities. Have insufficient expertise? Need to understand the concepts behind the programming assignment problems? Programming help from Assignmenthelp.us is just a click away to support you with the best-crafted programming papers. Students may be busy with other responsibilities or get stumped by the programming questions. Experienced experts and programmers are there to manage your Python programming assignments, projects, coursework and reviews. Whenever you're using external libraries (from PyPI or elsewhere), you'll want to manage the versions of these as well. The Pythonic solution for this is virtualenvs (sometimes abbreviated to venv). You may add some features here. If you have any question or want to know more details about this project, please contact me. Each of these experts is brought on board only after a multi-level evaluation mechanism, whereby we assess their competency in their specialised topic domains, in addition to their hold on Australian English and their comprehension of the different methods and rules shared by different Australian universities. Subscribe now. Save 50% on DataCamp and commit to learning data science and analytics.
Please fill out the fields below so we can help you better. Note: you must provide your domain name to get help. Domain names for issued certificates are all made public in Certificate Transparency logs (e.g. https://crt.sh/?q=example.com), so withholding your domain name here does not increase secrecy, but only makes it harder for us to provide help.

My domain is: bloq-e.de

I ran this command: no, I 'installed' the certificate

It produced this output: 443:0 server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)

My web server is (include version): Apache 2.4, I think; surely at least 2.4

The operating system my web server runs on is (include version): no idea

My hosting provider, if applicable, is: webgo.de

I can login to a root shell on my machine (yes or no, or I don't know): I don't know

I'm using a control panel to manage my site (no, or provide the name and version of the control panel): no

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot): I don't use certbot, I did manual verification

First, I did read the following kind of instructions: https://letsencrypt.org/getting-started/ - sorry, but not helpful. I've been researching this issue for a few days by now, and I read about sudo, and certbot, and Windows Server, and OpenSSL, what do I know, but nothing clean. I need it clean. I do have an Apache on my own computer, works just fine, but I'm not a pro. My hoster, however, does support Let's Encrypt, but only for more expensive packages. I recap what I did so far. I hope this contribution will help others, or, after getting through with it, you might add it to your getting-started page.
This link helped me a lot to get started - getting practical: in German, sorry. Following this blog, I first went to sslforfree.com and got two files which I uploaded to my server. I went back to sslforfree.com, clicked the link - so they check if the two files really are on my server - and got three text fields with private key, certificate and CA bundle.

Now comes the fun part. My host actually offers 'add SSL' in my menu and then shows me 4 text fields into which I pasted the text: CRT, CSR (I left this one empty), PrivateKey, and CA (optional). They don't provide any instructions or explanations. Then it activated it, and shifted the text from CA to CSR. I do get the message "SSL-Certificate" for this domain. Then I went to a site that checks the certificate; it basically said it's OK. And it was not, but maybe after 48 hours it would have been. In the meantime, I don't know why, I get 400 Bad Request. I read that, although SSL is in the menu, my host does not offer Let's Encrypt for my package unless I pay something like 15 euros every month. Anyway, I have my own Apache running - part of the deal - and I want to make it work.

Recap / problems: First, I have two domains: bloq-e.de and www.bloq-e.de. I read I need a certificate for each of them! I also read that I first need to make sure that neither of the above is redirected to the other one. As for now, www.bloq-e.de is redirected to bloq-e.de, but I can't find any such entry in the host's httpd.conf. I also read that, after installing the certificate(s), I have to redirect http:// to https://, after redirecting www.bloq-e.de to bloq-e.de. Is that true? I also found an instruction on how to upload the certificate files - not the text - onto my server, but the instruction raised more questions than it answered. I got very nervous when I read I was supposed to upload the private key to a safe place??? What's that? How do I do that? Then I again and again read: I first have to create a virtual host in Apache, on port 443, which requires the ServerName.
And I don't know this. Or maybe, since I upload my files via FileZilla / FTP, and there I have to enter the server, it maybe is "s192.goserver.host". So what I'm trying to tell you here is: I'm totally lost. And I'd greatly appreciate getting the whole picture of the installation. Do I have shell access? I don't know. This is a weird host, which turns customers into children. I can access / overwrite the httpd.conf, but ... yeah well, the host gives me a text field with all the entries they made on the Apache server - each client has their 'own' server running. Actually, it's not even clear that these are all the entries. And in my root, there is an .htaccess file which is plain empty.
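Since the questions above boil down to "what does a port-443 virtual host with a redirect look like", here is a minimal sketch using standard Apache 2.4 directives. This is an assumption, not instructions from webgo: the DocumentRoot and certificate paths are placeholders you must replace with the real locations on your server.

```apache
# Hypothetical minimal TLS virtual host for Apache 2.4.
# All filesystem paths below are placeholders.
<VirtualHost *:443>
    ServerName bloq-e.de
    ServerAlias www.bloq-e.de
    DocumentRoot /path/to/docroot

    SSLEngine on
    SSLCertificateFile      /path/to/certificate.crt
    SSLCertificateKeyFile   /path/to/private.key
    SSLCertificateChainFile /path/to/ca_bundle.crt
</VirtualHost>

# Once the vhost above works, send plain HTTP to HTTPS.
<VirtualHost *:80>
    ServerName bloq-e.de
    ServerAlias www.bloq-e.de
    Redirect permanent / https://bloq-e.de/
</VirtualHost>
```

On the "certificate for each domain" question: a single Let's Encrypt certificate can list both bloq-e.de and www.bloq-e.de as subject alternative names, so two separate certificates are not required if the issuing form lets you request both names together.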
Java is one of the most widely used programming languages in the world. Speaking of its popularity, more than nine million developers consider Java their primary language, so there is no doubt about Java's popularity worldwide. Java is a general-purpose, object-oriented programming language that first appeared in 1995. Despite being released over 25 years ago, Java maintains its position among the top 3 programming languages according to the TIOBE Index for February 2022. In the current era, Java is one of the most famous programming languages, as it can be used to design highly scalable applications that are light and fast and serve a variety of purposes. The demand for Java developers remains strong, even with competition from new languages; Java is one of the most in-demand programming languages on the job market, depending on whose numbers you look at. From being many people's first programming language to building highly scalable applications, Java has been everyone's favorite. But the first question is: who is a Java developer?

Who is a Java Developer?

A Java developer is a specialized programmer, or you could say coder, who teams up with software engineers to integrate Java into business software, applications, and websites. A Java developer handles multiple responsibilities during the development cycle of applications. The following are some of the major responsibilities they serve:

- Design, implement and maintain Java applications
- Take part in software analysis, testing, coding, and debugging
- Transform requirements into specifications
- Recommend modifications to improve established Java applications
- Develop technical designs for app development, etc.

The average Java developer salary in India is Rs 443,568 per annum. The salary can range from INR 202,602 to about INR 1,102,825 per annum, depending on factors like experience level, location, company profile, etc. So now it is clear why one should become a Java developer.
Now the questions that arise are: how to start? Where to start? What topics should one cover? The answer to these questions is the Java Backend Development Live Course, which provides you with the best placement assistance. Numerous services are included in this course, such as resume building, priority resume selection, and assistance in interview selection. Apart from this, should you learn all the concepts from a book, go with some online tutorials, or learn Java development by doing some projects? In this article, let's discuss all these things in detail.

Roadmap to Learn Java

Start with an overview of Java. Read some Java development blogs and also research some Java development topics. For example, read blogs on Introduction to Java, History of Java, and also topics like Is it Worth to Become a Java Developer in 2022, etc., and fully make up your mind to start your journey into Java development. Keep yourself motivated to learn Java development and to build some awesome projects using Java. Do it regularly and start learning new concepts one by one. It would be great to join some workshops or conferences on Java development before you start your journey. The best way to become a Java developer is by developing projects, from mini projects up to advanced ones. These Top 7 Java Project Ideas To Enhance Programming Skills will definitely help you.

1) Core Java

In this Java developer roadmap, the first thing you need to learn is Core Java. In Core Java, you need to learn the following major topics:

- Data Types and Variables
- Features and Architecture
- Operators and Expressions
- String Class
- Conditional Statements and Loops
- OOPs Concepts in Depth
- Java IO Streams
- Collection Framework
- Java 8

2) Advanced Java

After learning Core Java, you need to learn the Advanced Java concepts.
In Advanced Java you don't need to learn everything in detail, but you should be aware of the main topics and how they work. Whenever you are working on one of these particular things, you can read about it and implement it in your project.

3) IDEs

Once you are well versed in Core and Advanced Java, you should be able to code Java applications. For that, you should know any of the following IDEs:

- IntelliJ IDEA
- Spring Tool Suite
- VS Code

You should be aware of the different shortcuts for whichever IDE you prefer, so that it will optimize your workflow and increase your productivity when developing applications.

4) Build Tools

Now all your Java code needs to be built, so you must be aware of the different build tools for building your Java project.

5) Servers

Once you have created your web application, it's time to deploy it. You will be deploying your web application on a server, so you must be aware of at least one server that you can work with. Tomcat is the most widely used server; apart from that, JBoss is also used in many places.

6) Databases

Databases play an integral role in creating a Java application, as storing data is a crucial aspect. If you work in any organization, you will have to work with databases and write queries to execute different operations. Some of the topics that you need to learn are:

- Advanced SQL
- ORM (Object-Relational Mapping) frameworks
- JPA (Java Persistence API)
- Spring Data JPA

7) Testing

Testing is a very important phase of your development journey, so you must be aware of how to test your Java application to minimize errors and maximize efficiency:

- Unit Testing
- Integration Testing
- Debugging Code (must know)

8) Logging

There are different logging libraries available in Java.
So whenever you create a Java application, you should log your errors or events to get information about what's happening in the system. You should be aware of one of the common logging libraries.

9) Frameworks

There are a lot of different frameworks available in Java. These frameworks foster easy debugging, extensive code reusability, and improved code efficiency, and reduce the overall development time. Mentioned below is one of the Java frameworks that you can learn:

- Spring Boot

These 10 Most Popular Java Frameworks That You Must Try tells about some popular frameworks you can use in your projects.

10) Keep Practicing

"Practice makes a man perfect": this phrase manifests the importance of continuous practice and learning. So keep learning, keep practicing, and stay updated.
// This script licensed under the MIT.
// http://orgachem.mit-license.org

/**
 * Namespace for string utilities.
 * @namespace
 * @exports lib/string
 */
var string = exports;

/**
 * Fills with characters between a head and a tail. Word wraps if the combined
 * head and tail width is too long.
 * @param {string} head Head string.
 * @param {string} tail Tail string.
 * @param {number} tw Text width.
 * @param {?string=} opt_char Optional character to insert. Default is a white
 *     space.
 * @return {string} Built string.
 */
string.fillMiddle = function(head, tail, tw, opt_char) {
  var headerWidth = head.length;
  var tailWidth = tail.length;
  var charWidth = tw - headerWidth - tailWidth;
  var char = opt_char || ' ';
  if (charWidth <= 0) {
    // Head and tail do not fit on one line: wrap the tail onto its own line.
    charWidth = tw - tailWidth;
    return [head, '\n', string.repeat(char, charWidth), tail].join('');
  } else {
    return [head, string.repeat(char, charWidth), tail].join('');
  }
};

/**
 * Pulls the given string to the right edge of the text width.
 * @param {string} str String to pull right.
 * @param {number} tw Text width.
 * @param {?string=} opt_char Optional character to pad with. Default is a
 *     white space.
 * @return {string} Built string.
 */
string.pullRight = function(str, tw, opt_char) {
  var whiteWidth = tw - str.length;
  if (whiteWidth < 0) {
    throw Error('Given string is too long: "' + str + '"');
  }
  return string.repeat(opt_char || ' ', whiteWidth) + str;
};

/**
 * Converts multiple whitespace chars (spaces, non-breaking-spaces, new lines
 * and tabs) to a single space, and strips leading and trailing whitespace.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str Input string.
 * @return {string} A copy of {@code str} with collapsed whitespace.
 */
string.collapseWhitespace = function(str) {
  // Since IE doesn't include non-breaking-space (0xa0) in their \s character
  // class (as required by section 7.2 of the ECMAScript spec), we explicitly
  // include it in the regexp to enforce consistent cross-browser behavior.
  return str.replace(/\s+/g, ' ').replace(/^\s+|\s+$/g, '');
};

/**
 * Removes the breaking spaces from the left and right of the string and
 * collapses the sequences of breaking spaces in the middle into single spaces.
 * The original and the result strings render the same way in HTML.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str A string in which to collapse spaces.
 * @return {string} Copy of the string with normalized breaking spaces.
 */
string.collapseBreakingSpaces = function(str) {
  return str.replace(/[\t\r\n ]+/g, ' ').replace(
      /^[\t\r\n ]+|[\t\r\n ]+$/g, '');
};

/**
 * Trims white spaces to the left and right of a string.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str The string to trim.
 * @return {string} A trimmed copy of {@code str}.
 */
string.trim = function(str) {
  // Since IE doesn't include non-breaking-space (0xa0) in their \s character
  // class (as required by section 7.2 of the ECMAScript spec), we explicitly
  // include it in the regexp to enforce consistent cross-browser behavior.
  return str.replace(/^\s+|\s+$/g, '');
};

/**
 * Trims whitespaces at the left end of a string.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str The string to left trim.
 * @return {string} A trimmed copy of {@code str}.
 */
string.trimLeft = function(str) {
  // Since IE doesn't include non-breaking-space (0xa0) in their \s character
  // class (as required by section 7.2 of the ECMAScript spec), we explicitly
  // include it in the regexp to enforce consistent cross-browser behavior.
  return str.replace(/^\s+/, '');
};

/**
 * Trims whitespaces at the right end of a string.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str The string to right trim.
 * @return {string} A trimmed copy of {@code str}.
 */
string.trimRight = function(str) {
  // Since IE doesn't include non-breaking-space (0xa0) in their \s character
  // class (as required by section 7.2 of the ECMAScript spec), we explicitly
  // include it in the regexp to enforce consistent cross-browser behavior.
  return str.replace(/\s+$/, '');
};

/**
 * Repeats a string n times.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} string The string to repeat.
 * @param {number} length The number of times to repeat.
 * @return {string} A string containing {@code length} repetitions of
 *     {@code string}.
 */
string.repeat = function(string, length) {
  return new Array(length + 1).join(string);
};

/**
 * Escapes the XML special characters {@code & < > "}. Used by
 * {@code string.truncate} when re-escaping protected characters.
 *
 * @param {string} str The string to escape.
 * @return {string} An escaped copy of {@code str}.
 */
string.htmlEscape = function(str) {
  return str.replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;');
};

/**
 * Truncates a string to a certain length and adds {@code '...'} if necessary.
 * The length also accounts for the ellipsis, so a maximum length of 10 and a
 * string {@code 'Hello World!'} produces {@code 'Hello W...'}.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str The string to truncate.
 * @param {number} chars Max number of characters.
 * @param {boolean=} opt_protectEscapedCharacters Whether to protect escaped
 *     characters from being cut off in the middle.
 * @return {string} The truncated {@code str} string.
 */
string.truncate = function(str, chars, opt_protectEscapedCharacters) {
  if (opt_protectEscapedCharacters) {
    str = string.unescapeEntities(str);
  }
  if (str.length > chars) {
    str = str.substring(0, chars - 3) + '...';
  }
  if (opt_protectEscapedCharacters) {
    str = string.htmlEscape(str);
  }
  return str;
};

/**
 * Unescapes an HTML string.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} str The string to unescape.
 * @return {string} An unescaped copy of {@code str}.
 */
string.unescapeEntities = function(str) {
  if (string.contains(str, '&')) {
    // Fall back on pure XML entities
    return string.unescapePureXmlEntities_(str);
  }
  return str;
};

/**
 * Unescapes XML entities.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @private
 * @param {string} str The string to unescape.
 * @return {string} An unescaped copy of {@code str}.
 */
string.unescapePureXmlEntities_ = function(str) {
  return str.replace(/&([^;]+);/g, function(s, entity) {
    switch (entity) {
      case 'amp':
        return '&';
      case 'lt':
        return '<';
      case 'gt':
        return '>';
      case 'quot':
        return '"';
      default:
        if (entity.charAt(0) == '#') {
          // Prefix with 0 so that hex entities (e.g. &#x10) parse as hex.
          var n = Number('0' + entity.substr(1));
          if (!isNaN(n)) {
            return String.fromCharCode(n);
          }
        }
        // For invalid entities we just return the entity unchanged.
        return s;
    }
  });
};

/**
 * Checks whether a string contains a given substring.
 *
 * This method is a clone of
 * {@link http://closure-library.googlecode.com/svn/docs/index.html}.
 *
 * @param {string} s The string to test.
 * @param {string} ss The substring to test for.
 * @return {boolean} True if {@code s} contains {@code ss}.
 */
string.contains = function(s, ss) {
  return s.indexOf(ss) != -1;
};
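A short usage sketch of fillMiddle, the least obvious utility in the module. To keep the example standalone, repeat and fillMiddle are reproduced here; in a real project you would require the module instead.

```javascript
// Standalone demo: repeat and fillMiddle reproduced from the module above.
var string = {};

string.repeat = function (s, length) {
  // Array(n + 1).join(s) yields n copies of s joined together.
  return new Array(length + 1).join(s);
};

string.fillMiddle = function (head, tail, tw, opt_char) {
  var charWidth = tw - head.length - tail.length;
  var ch = opt_char || ' ';
  if (charWidth <= 0) {
    // Head and tail do not fit on one line: wrap the tail onto its own line.
    return [head, '\n', string.repeat(ch, tw - tail.length), tail].join('');
  }
  return [head, string.repeat(ch, charWidth), tail].join('');
};

// Typical table-of-contents style output, 20 characters wide:
console.log(string.fillMiddle('Chapter 1', '3', 20, '.'));
// "Chapter 1..........3" (9 + 10 + 1 = 20 characters)
```

Passing a tw smaller than head.length + tail.length exercises the wrapping branch: the head is emitted on its own line and the tail is pulled to the right edge of the next one.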
You could potentially apply a feature selection or feature importance process to your PCA results if you wanted. It would be overkill though. Let's first have a look at the Python file we have just created. The stub has just a single line.

Access 43 lectures & 7 hours of content 24/7. Build a receipt segmenter to find text in an image. Count coins & dollar bills in an image after making a currency counter. Find Legend of Zelda rupees using a pattern matching algorithm. Design a face swapping app. Explore the mathematical theory & techniques behind computer vision. Understand fundamental computer vision & image processing techniques.

Do you have any questions about feature selection or this article? Ask your questions in the comments and I will do my best to answer them. PhD, former university instructor and software engineer with 20 years of software development experience in MATLAB and Python.

To perform feature selection, we should ideally have fetched the values from each column of the dataframe to check the independence of each feature with the class variable. Is it a built-in behavior of sklearn.preprocessing because of which you fetch the values as each row?

I'm not sure about the other methods, but feature correlation is an issue that needs to be addressed before assessing feature importance. In this tutorial we'll make a very simple Python script, so we'll choose Pure Python. This template creates an empty project for us. It has to be this way, since unnamed parameters are defined by position.
We could define a function that takes... You can see that we are given an importance score for each attribute, where the larger the score, the more important the attribute. The scores suggest the importance of plas.

I'm new to ML and am doing a project in Python; at some point it has to recognize correlated features. I wonder what the next step would be? How do I know which feature is more important for the model if there are categorical features? Is there a way/method to calculate it before one-hot encoding (get_dummies), or how to calculate it after one-hot encoding if the model is not tree-based?

Also, note that you may be given a proper English sentence "Able was I ere I saw Elba." with punctuation. Your palindrome checker may have to quietly skip punctuation. Also, you may have to quietly match without considering case. This is a little more elaborate.
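The palindrome exercise described above can be sketched in a few lines. This is a minimal illustration (the function name is an assumption): strip everything except letters and digits, lowercase the rest, then compare the string with its reverse.

```javascript
// Minimal palindrome checker: ignore punctuation, whitespace and case.
function isPalindrome(sentence) {
  var cleaned = sentence.toLowerCase().replace(/[^a-z0-9]/g, '');
  return cleaned === cleaned.split('').reverse().join('');
}

console.log(isPalindrome('Able was I ere I saw Elba.')); // true
console.log(isPalindrome('Hello, world!'));              // false
```

The regex-based cleanup is what lets the checker "quietly skip punctuation", and lowercasing first is what lets it "quietly match without considering case".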
Vim: Compiling HowTo for Vim on Unix-like systems
Last change: 2015-12-24 02:23 UTC
Back to "The Vim Editor" | Back to Welcome page

export variable='value'

There is one difficulty: you cannot just "run" this script (by giving its name to bash as a command name); you must "source" it each time you are ready to start a new compilation. But this actually belongs in a further paragraph, and we shall repeat it there. Here's the sample script I use; it is fairly general, and any features for which you don't have the required "development" packages will be eliminated at configure-time:

#!/bin/bash
export CONF_OPT_GUI='--enable-gnome-check'
export CONF_OPT_PERL='--enable-perlinterp'
export CONF_OPT_PYTHON='--enable-pythoninterp'
export CONF_OPT_TCL='--enable-tclinterp'  # /usr/bin/tclsh (softlink) is correctly set
export CONF_OPT_RUBY='--enable-rubyinterp'
export CONF_OPT_LUA='--enable-luainterp'
export CONF_OPT_MZSCHEME='--disable-mzschemeinterp'
#export CONF_OPT_PLTHOME='--with-plthome=/usr/local/plt'
export CONF_OPT_CSCOPE='--enable-cscope'
export CONF_OPT_MULTIBYTE='--enable-multibyte'
export CONF_OPT_FEAT='--with-features=huge'
export CONF_OPT_COMPBY='"--email@example.com"'

You may use that, except that you will of course have to put your name, not mine, in the last line; and you may or may not have to insert something like --with-tclsh=tclsh8.6 at the end of the CONF_OPT_TCL line if configure does not detect the TCL version correctly.

export CONF_ARGS2='--with-vim-name=vi'

to avoid ending up with several executables all of the same name. Note that you can have at most one value for each environment variable, but you can concatenate several configure command-line arguments (space-separated) in one quoted value, e.g.

export CONF_ARGS2='--with-vim-name=vim-mod --with-modified-by="John Doe & Mary Roe"'

Notice the use of both single and double quotes in the above example.
(date && hg pull -u) 2>&1 | tee -a ../hgvim.log

or, if you did (and have installed the fetch extension):

(date && hg fetch --switch-parent) 2>&1 | tee -a ../hgvim.log

make reconfig 2>&1 | tee ../make-vim.log
make 2>&1 | tee ../make-vim.log

cd src/tiny

Let's source the configure arguments into this shell's environment. This file does not exist by default; you must have set it up like mycfg in the above case, but with the configure arguments for this particular build:

source tinycfg.sh
make 2>&1 | tee tinymake.log

Check that all went well:

./vi --version | more

and if it looks OK:

make installvimbin 2>&1 | tee tinyinst.log
ls -l `which vi`
The chief power of HTML comes from its ability to link text and/or an image to another document or section of a document. A browser highlights the identified text or image with color and/or underlines to indicate that it is a hypertext link (often shortened to hyperlink or just link). HTML's single hypertext-related tag is <A>, which stands for anchor. To include an anchor in your document:

- start the anchor with <A (include a space after the A)
- specify the document you're linking to by entering the parameter HREF="filename" followed by a closing right angle bracket (>)
- enter the text that will serve as the hypertext link in the current document
- enter the ending anchor tag: </A> (no space is needed before the end anchor tag)

Here is a sample hypertext reference in a file called US.html:

<A HREF="MaineStats.html">Maine</A>

This entry makes the word Maine the hyperlink to the document MaineStats.html, which is in the same directory as the first document.

Relative Pathnames Versus Absolute Pathnames

You can link to documents in other directories by specifying the relative path from the current document to the linked document. For example, a link to a file NYStats.html located in the subdirectory AtlanticStates would be:

<A HREF="AtlanticStates/NYStats.html">New York</A>

These are called relative links because you are specifying the path to the linked file relative to the location of the current file. You can also use the absolute pathname (the complete URL) of the file, but relative links are more efficient in accessing a server. They also have the advantage of making your documents more "portable" -- for instance, you can create several web pages in a single folder on your local computer, using relative links to hyperlink one page to another, and then upload the entire folder of web pages to your web server. The pages on the server will then link to other pages on the server, and the copies on your hard drive will still point to the other pages stored there.
It is important to point out that UNIX is a case-sensitive operating system where filenames are concerned, while DOS and the MacOS are not. For instance, on a Macintosh, "DOCUMENT.HTML", "Document.HTML", and "document.html" are all the same name. If you make a relative hyperlink to "DOCUMENT.HTML", and the file is actually named "document.html", the link will still be valid. But if you upload all your pages to a UNIX web server, the link will no longer work. Be sure to check your filenames before uploading.

Pathnames use the standard UNIX syntax. The UNIX syntax for the parent directory (the directory that contains the current directory) is "..". (For more information consult a beginning UNIX reference text such as Learning the UNIX Operating System from O'Reilly and Associates, Inc.) If you were in the NYStats.html file and were referring to the original document US.html, your link would look like this:

<A HREF="../US.html">United States</A>

In general, you should use relative links whenever possible because:

- it's easier to move a group of documents to another location (because the relative path names will still be valid)
- it's more efficient connecting to the server
- there is less to type

However, use absolute pathnames when linking to documents that are not directly related. For example, consider a group of documents that comprise a user manual. Links within this group should be relative links. Links to other documents (perhaps a reference to related software) should use absolute pathnames instead. This way if you move the user manual to a different directory, none of the links would have to be updated.

The World Wide Web uses Uniform Resource Locators (URLs) to specify the location of files on other servers. A URL includes the type of resource being accessed (e.g., Web, gopher, FTP), the address of the server, and the location of the file.
The syntax is:

scheme://host.domain[:port]/path/filename

where scheme is one of:

- file: a file on your local system
- ftp: a file on an anonymous FTP server
- http: a file on a World Wide Web server
- gopher: a file on a Gopher server
- wais: a file on a WAIS server
- news: a Usenet newsgroup
- telnet: a connection to a Telnet-based service

The port number can generally be omitted. (That means unless someone tells you otherwise, leave it out.) For example, to include a link to this primer in your document, enter:

<A HREF="http://www.ncsa.uiuc.edu/General/Internet/WWW/HTMLPrimer.html">NCSA's Beginner's Guide to HTML</A>

This entry makes the text NCSA's Beginner's Guide to HTML a hyperlink to this document. There is also a mailto scheme, used to hyperlink email addresses, but this scheme is unique in that it uses only a colon (:) instead of :// between the scheme and the address. You can read more about mailto below. For more information on URLs, refer to:

- WWW Names and Addresses, URIs, URLs, URNs
- A Beginner's Guide to URLs

Links to Specific Sections

Anchors can also be used to move a reader to a particular section in a document (either the same or a different document) rather than to the top, which is the default. This type of anchor is commonly called a named anchor because to create the links, you insert HTML names within the document. This guide is a good example of using named anchors in one document. The guide is constructed as one document to make printing easier. But as one (long) document, it can be time-consuming to move through when all you really want to know about is one bit of information about HTML. Internal hyperlinks are used to create a "table of contents" at the top of this document. These hyperlinks move you from one location in the document to another location in the same document. (Go to the top of this document and then click on the Links to Specific Sections hyperlink in the table of contents. You will wind up back here.) You can also link to a specific section in another document.
That information is presented first because understanding it helps you understand linking within one document.

Links Between Sections of Different Documents

Suppose you want to set a link from document A (documentA.html) to a specific section in another document (MaineStats.html). Enter the HTML coding for a link to a named anchor:

In addition to the many state parks, Maine is also home to Acadia National Park.

Think of the characters after the hash (#) mark as a tab within the MaineStats.html file. This tab tells your browser what should be displayed at the top of the window when the link is activated. In other words, the first line in your browser window should be the Acadia National Park heading.

Next, create the named anchor (in this example "ANP") in MaineStats.html:

With both of these elements in place, you can bring a reader directly to the Acadia reference in MaineStats.html.

NOTE: You cannot make links to specific sections within a different document unless either you have write permission to the coded source of that document or that document already contains in-document named anchors. For example, you could include named anchors to this primer in a document you are writing because there are named anchors in this guide (use View Source in your browser to see the coding). But if this document did not have named anchors, you could not make a link to a specific section because you cannot edit the original file on NCSA's server.

Links to Specific Sections within the Current Document

The technique is the same except the filename is omitted. For example, to link to the ANP anchor from within MaineStats, enter:

...More information about Acadia National Park is available elsewhere in this document.

Be sure to include the named anchor tag at the place in your document where you want the link to jump to (Acadia National Park).
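The markup stripped from the examples above can be sketched as follows; the filename MaineStats.html and the anchor name ANP come from the text, while the link text placement and the heading markup are assumptions:

```html
<!-- In documentA.html: link to the named anchor "ANP" in MaineStats.html -->
In addition to the many state parks, Maine is also home to
<A HREF="MaineStats.html#ANP">Acadia National Park</A>.

<!-- In MaineStats.html: the named anchor the link jumps to -->
<H2><A NAME="ANP">Acadia National Park</A></H2>

<!-- Within MaineStats.html itself, the filename is omitted -->
...More information about
<A HREF="#ANP">Acadia National Park</A>
is available elsewhere in this document.
```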
Named anchors are particularly useful when you think readers will print a document in its entirety or when you have a lot of short information you want to place online in one file.

You can make it easy for a reader to send electronic mail to a specific person or mail alias by including a mailto URL in a hyperlink. The format uses the mailto scheme followed by a colon and the mail address. For example, enter:

NCSA Publications Group

to make the text NCSA Publications Group a hyperlink that opens a mail window preaddressed to the NCSA Publications Group alias. (You, of course, will enter another mail address!)
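As markup, the mailto example would look something like this (the address is a placeholder, not the group's real alias):

```html
<!-- mailto uses only a colon, not ://, between scheme and address -->
<A HREF="mailto:pubs@example.org">NCSA Publications Group</A>
```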
Novel – I'm Secretly Married to a Big Shot – Chapter 2468

When Qiao Mianmian didn't post anything, Mo Yesi wouldn't go on Weibo. He only paid attention to her. But she had suddenly asked him those strange questions, and he suspected they were connected to Weibo.

Soon, Mo Yesi logged into Weibo. Besides Qiao Mianmian's posts, he followed little else. He wanted to know what she had seen. He quickly browsed through the whole Weibo page and spotted some trending topics. His expression darkened as he read the headlines:

# Shen Rou Crushes Qiao Mianmian #

So it turned out she had seen those messy trending topics. No wonder she had suddenly asked him those weird questions.

Even though Mo Yesi wasn't in the entertainment industry, he could tell that someone was deliberately setting the pace. It was easy to guess who was behind it. How dare the Internet Water Army try to force him to become a couple with her?

He had known that Shen Rou was secretly stealing the Mo Corporation's business, but it wasn't important to him. He had only asked Wei Zheng to monitor the Shen Corporation and hadn't intended to deal with them. As long as Shen Rou didn't do anything bad to Qiao Mianmian, Mo Yesi wouldn't mind her taking away some of the Mo Corporation's business. Old Madam had advised him that the Shen and Mo families had generations of friendship, and told him not to make things too ugly or too extreme. Mo Yesi had never done anything to the Shen Corporation before.

But if it concerned Qiao Mianmian, it would be crossing his bottom line. He couldn't just let this matter rest. It would be easy for the Mo Corporation to deal with the Shen Corporation, especially when they had only just recovered and were far weaker than when they were at their peak.

The man's dark eyes were gloomy, his thin palms were clenched, and his thin lips were pursed tightly. His eyes turned cold. He picked up his cell phone again and sent Wei Zheng a message.

Wei Zheng responded promptly: [Understood, Chairman Mo. We'll get it done right away.]

Mo Yesi believed in Wei Zheng's ability. He did everything he was asked to do. After instructing Wei Zheng, he looked down at the sleeping girl in his arms.
function checkCharacter(character, data, withReasoning) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var cs = ss.getSheetByName("Control Panel");
  var active_cell = ss.getActiveCell();
  var lr = data.getLastRow();
  var found = 0;
  var seriesList = [];

  // clear any data validations before getting started
  active_cell.offset(0, 1).clear().clearDataValidations();
  active_cell.offset(0, 2).clear().clearDataValidations();

  // get the list of available characters to reject
  var char_list = data.getRange(2, 2, lr - 1).getValues();

  // if the character is found, remember the series it belongs to
  for (var i = 0; i < char_list.length; i++) {
    if (character == char_list[i][0]) {
      found++;
      seriesList.push(data.getRange(i + 2, 5).getValue());
    }
  }

  // if the character cannot be found, report it and stop
  if (found == 0) {
    active_cell.offset(1, 0).setValue(character + " is not on the list.");
    return;
  }

  // if more than one character matches, offer a dropdown to select the series
  if (found > 1) {
    var seriesRule = SpreadsheetApp.newDataValidation().requireValueInList(seriesList).build();
    active_cell.offset(0, 1).setDataValidation(seriesRule);
  } else {
    // otherwise, automatically fill in the series
    active_cell.offset(0, 1).setValue(seriesList[0]);
  }

  // log how many were found in the list
  active_cell.offset(1, 0).setValue(found + " found on the list.");

  // if rejecting or changing reasons, give a dropdown of possible reasons
  if (withReasoning) {
    var startRowReasons = 3;
    var colIndexReasons = getReasonsColIndex();
    var colLastRow = getReasonsLastRow();
    var validationRange = cs.getRange(startRowReasons, colIndexReasons, colLastRow - startRowReasons + 1);
    var validationRule = SpreadsheetApp.newDataValidation().requireValueInRange(validationRange).build();
    active_cell.offset(0, 2).setDataValidation(validationRule);
  }
}

function getLastRowofColumn(sheet, col) {
  var lr = sheet.getLastRow();
  for (var i = lr; i > 0; i--) {
    if (sheet.getRange(i, col).getValue() != '') return i;
  }
  return 0;
}

function getReasonsLastRow() {
  var cs = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Control Panel");
  return getLastRowofColumn(cs, getReasonsColIndex());
}

function getReasonsColIndex() {
  var cs = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Control Panel");
  var firstRowContents = cs.getRange(1, 1, 1, cs.getLastColumn()).getValues();
  return firstRowContents[0].indexOf("Reasons") + 1;
}
What's in this article? Machine learning has become the buzzword in virtually every industry that is looking to leverage relevant data to better compete. Consumer direct lenders understand the need to strategically manage critical data assets. However, where machine learning fits into the equation is not always as clear and implementation decisions surrounding the technology can be equally daunting. Apprehension around relying on machine learning technology usually boils down to three issues: A misunderstanding of how the technology works, a belief that the technology is too sophisticated for certain tasks, and a fear that the technology is a replacement for individual subject matter expertise (also referred to as domain expertise). In this article, we work to bust those myths and explain how machine learning technology complements a lender’s existing lead management strategies. How does Machine Learning work? At the core, machine learning is essentially a method of applying a mathematical construct to a repetitive process. Machine learning applies mathematical equations to specific processes and automatically learns to predict outcomes. The technology relies on algorithms to analyze massive amounts of data and deliver predictions to achieve better business results over time. A human with domain expertise can generally perform some of these tasks manually or through simple automations but machine learning amplifies the value delivered by performing the tasks in an automated, repeatable, and robust manner. Whereas a human is generally limited to optimizing two or maybe three dimensions at one time, a machine can perform global optimizations on hundreds of dimensions with ease. The automation allows the machine to find multi-dimensional correlations that would be nearly impossible to find manually. Machine learning involves fitting existing data to an equation to predict an outcome of a critical business process. 
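As a minimal sketch of "fitting existing data to an equation" (all numbers and feature names here are invented for illustration, not ProPair's actual model), ordinary least squares can learn weights that predict an outcome from historical lead data:

```python
import numpy as np

# Hypothetical historical leads: columns = [credit score / 100, loan amount / $100k]
X = np.array([[6.2, 2.0],
              [7.1, 3.5],
              [5.4, 1.2],
              [6.8, 2.8]])
# Observed outcome for each lead (e.g., a conversion score)
y = np.array([0.30, 0.55, 0.18, 0.47])

# Add an intercept column and fit y ≈ A @ w by least squares
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the outcome for a new lead using the fitted weights
new_lead = np.array([1.0, 6.5, 2.4])
prediction = float(new_lead @ w)
```

A real system would optimize hundreds of such dimensions at once, which is exactly the multi-dimensional correlation-finding the article describes.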
The capabilities of machine learning allow organizations to have access to an automated, unbiased view of critical business issues and to identify trends that a human would be unlikely to find. What’s the Difference between Machine Learning and AI? By definition, AI refers to using a computer to automate a task generally performed by humans. A simple example of AI is a human-generated decision tree (e.g., fixed rules or workflows) that assigns East and West Coast leads to different groups of loan officers. Historically, a human may have performed this task manually, but modern lead management systems allow an administrator to automate this assignment process. In other words, many consumer direct lenders are likely using some form of AI today. Machine learning is a subset of AI in which algorithms are used to automatically learn from data to optimize a specific process or outcome. Machine learning allows for the prediction of an outcome in an automated, repeatable, and robust way. This systematic approach frees up time and resources to work on other business issues that are facing the organization. AI is the technology that allows for the creation of systems that can simulate human intelligence, while machine learning learns from past data to predict future outcomes. While AI allows computers to automate human behavior, machine learning takes this a step further by applying algorithms to predict a specific outcome using historical data. Misconceptions about Machine Learning The first misconception about machine learning is that the technology is only useful for solving very complex problems. On the contrary, machine learning often provides the greatest value to simple problems. The automated, repeatable, and robust nature of the technology allows its users to eliminate time-consuming, repeatable tasks from existing processes. 
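As a concrete contrast to learned models, the fixed-rule "AI" described earlier — a human-authored decision tree that routes East and West Coast leads to different loan-officer groups — might be sketched like this (the state groupings and team names are invented for illustration):

```python
# Hypothetical human-authored routing rule: a fixed decision tree of the
# kind many lead management systems let an administrator configure.
WEST_COAST = {"CA", "OR", "WA"}
EAST_COAST = {"NY", "NJ", "MA", "FL"}

def assign_lead(lead):
    """Route a lead to a loan-officer group based on its state."""
    state = lead["state"]
    if state in WEST_COAST:
        return "west-coast-team"
    if state in EAST_COAST:
        return "east-coast-team"
    return "national-team"

team = assign_lead({"state": "CA"})  # → "west-coast-team"
```

Machine learning replaces the hand-written branches with rules learned from historical outcomes, but the automation pattern is the same.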
The peace of mind that comes with knowing a task is being executed in a reliable, accurate, and consistent manner is often a large component of the value proposition for machine learning. The second misconception about machine learning is that it is a replacement for existing processes or human resources. Machine learning generally works best as an extension of existing processes and in combination with human domain expertise. Machine learning will use mathematics to uncover process strengths and identify potential areas for improvement. Historical data has many stories to tell, and surfacing that information through machine learning applications will improve processes over time. Machine learning supplements domain expertise to amplify results in an automated, repeatable, and robust way. And it does so without the biases that exist within human decision making. Domain experts are still an essential part of the process because they need to define and provide context for each problem they are trying to solve. In fact, the development of ProPair's products was inspired by customer-driven insights from problems they were encountering across their sales teams. Customers identified their challenges and the ProPair team worked to build machine learning products that address those needs. ProPair's goal is to apply the power of machine learning to enable consumer direct lenders to improve their market position. Machine learning, like many technologies, is only as powerful as the people behind the scenes. For example, every sales team has its strengths and weaknesses, but the ProPair technology amplifies the strengths while minimizing the weaknesses. This advancement can tip the scales in favor of those leveraging the technology. Those that move early can start building the competitive edge needed to stand out in this crowded market before the opportunity has passed.
implements the ODBC driver interface for ADBC (acdk_sql, similar to JDBC) written in PERL/TK. The database does full searches for Title, Author, Publisher, ISBN, and Catalog Number. In addition it has full edit and delete capabilities. cbl2pg is a library to connect Cobol programs to PostgreSQL databases. a fast, reliable, lightweight package for creating and reading constant databases cdbxx is a small STL style C++ library for TinyCDB implementation of Constant Database. It provides iterators, data adapters and high level interfaces for databases. Courier Authentication Library The Courier Authentication Library is an API toolkit for implementing password validation and account metadata lookups. Authentication can be configured using either the traditional system password file lookups, or using MySQL, PostgreSQL, LDAP, or DB/GDBM databases. D_store is an alternative database-like storage library. It is very good at tagging, live searching, hierarchies, and any other kind of relationships that your data has. Its primary goal is to be an enabler for applications with lots of related data to be explored and manipulated by the user, with live search fields, while still remaining fast with lots of items. Database Access Made Easy: a set of tools and libraries to remove the drudgery of manually writing database access code. DAME is developed in C++ on Linux(TM) using the Antlr parser generator. Database Template Library: the goal of this library is to make ODBC recordsets look just like an STL container. a lightweight C++ library for database abstraction and object/relational mapping. It is targeted at C++ developers who need to access databases, and consists of a set of interface classes and some drivers for common data sources. Currently available drivers are for mysql, postgresql, sqlite databases a database library implementing a first-in-first-out cache.
The database (or `cache') size limits are configured at creation time, and as soon as they are reached the oldest records are automatically removed a database library allowing for multiple reading and one writing process at any time (commercial) FIBPlus is a fast, flexible and high-performance component library for Borland's Delphi 3-7, C++ Builder 3-6, and Kylix 3, intended for work with Borland InterBase and Firebird using the direct Interbase API. provides an easy way for C language programmers to connect to Firebird database servers (and probably Interbase). It uses the Firebird/Interbase API FlexiRecord is a ruby library for object oriented access to databases. Each table is represented by a class, each row of that table is represented by an object of that class. This library is especially aimed to properly support database transactions. re-implementation of Sybase's dblib library for Linux. It is available under the LGPL a set of database routines that use extensible hashing. It works similarly to the standard UNIX dbm routines. to facilitate automatic, RDBMS independent persistency (data access) for your business objects and custom queries. It is implemented using custom attributes and reflection in C#. Beginners should be able to use the framework immediately GeoTypes is a Python library that implements both the OpenGIS/PostGIS and standard PostgreSQL geometry types. It integrates with the psycopg Python/PostgreSQL interface. It provides implementations of all of the OpenGIS/PostGIS classes, except (x,y,m) and (x,y,z,m). It currently supports the EWKB, HEXEWKB, WKB, and WKT formats. a complete suite of libraries/applications that allow an easy access to a wide range of database systems by making use of the capabilities provided by CORBA for communication between separated modules.
The CORBA implementation used by GNOME-DB is ORBit, the GNOME Project's ORB GNUstep Database Library The GNUstep Database Library 2 (GDL2) is a set of libraries to map Objective-C objects to rows of relational database management systems (RDBMS). It aims to be compatible with Enterprise Objects Framework (EOF) as released with WebObjects 4.5 from Apple Inc. a C++ library for communicating with SQL databases in a generic way a database generic interface. The main lib provide a way to use dbds (data base drivers) which are linked at run-time a cross platform library for reading Microsoft access ( MDB / jet) databases, for data export and recovery. a combinator library for expressing queries and other operations on relational databases in a type safe and declarative way. All the queries and operations are completely expressed within Haskell, no embedded (SQL) commands are needed hiberlite provides C++ object-relational mapping for SQLite 3. Its design and API are inspired by the Boost.Serialization, which means there is almost no API to learn. a set of C++ libraries which allow the rapid development of database applications with all features a modern database application should have like forms and reports an Erlang library providing the needed functionality to store XML data into a mnesia database Itzam is a deliberately portable, low-level C library that can (and will) be wrapped in C++, Java, Python, and Fortran 95. The Itzam core library creates and manipulates files containing variable-length, random access records. Information is referenced by a user-defined key value; indexes may be combined with or separate from data. libdbi implements a database-independent abstraction layer in C, similar to the DBI/DBD layer in Perl. Writing one generic set of code, programmers can leverage the power of multiple databases and multiple simultaneous database connections by using this framework. 
The libdbi-drivers project hosts the database-specific drivers for libdbi, a database abstraction library written in C. Currently, drivers are available for Firebird/Interbase, FreeTDS (MS-SQL and Sybase client), mSQL, MySQL, Oracle, PostgreSQL, and SQLite/SQLite3. libdrizzle is the client and protocol library for the Drizzle project. libinterbase++ is a just thin C++ layer on top of the original Interbase C API written by Borland; of course, it is also compatible with its opensource equivalent, Firebird. a C++ library that provides an interface to several common Database Management Systems (DBMS). It enables the programmer to write application code that can be built and run unchanged on a variety of platforms and against several DBMS Libodbc++ is a c++ class library for accessing SQL databases. It is designed with standards in mind, so it provides a subset of the well-known JDBC 2.0(tm) and runs on top of ODBC. It is distributed under the LGPL. libpackman provides a single API for opening multiple package formats and package databases. libpqxx is the C++ front-end or driver (in the sense of "language binding") for PostgreSQL. It can help you write a program that uses PostgreSQL as a database on just about any platform. The PreludeDB Library provides an abstraction layer upon the type and format of the database used to store IDMEF alerts. It allows developers to use the Prelude IDMEF database easily and efficiently without worrying about SQL, and to access the database independently of the type/format of the database. provides a way to support multiple database management systems in an application with negligible overhead, in terms of code as well as system resources A C++ library that wraps ODBC calls into an object oriented interface which resembles a subset of Java's JDBC. Libsqlfs is a library, used in conjunction with the popular open source SQLite database software.
Libsqlfs allows applications to put an entire read/write file system into a relational database as a single file in the host file system. a simple C-library to access Oracle via the OCI interface LiquiBase is a DBMS-independent library for tracking, managing, and applying database changes. It is built on a simple premise: all database changes (structure and data) are stored in an XML-based descriptive manner and checked into source control. LiquiBase aims to provide a solution that supports merging of changes from multiple developers, works well with code branches, supports a database refactoring IDE/plugin, and more. LiteSQL is a C++ library that integrates C++ objects tightly to relational database and thus provides an object persistence layer. LiteSQL supports SQLite3, PostgreSQL and MySQL as backends. LiteSQL creates tables, indexes and sequences to database and upgrades schema when needed. a library that manages archives using a simple binary file format designed to store meta and user data side by side. A mar file may be useful in situations where a full-scale database is unsuitable a planned set of libraries and utilities to facilitate exporting data from MS Access databases (mdb files) into a multiuser database such as Oracle, Sybase, DB2, Informix, MySQL, Postgresql, or similar. a persistence layer generator application based on the persistence module of the MetaL compiler engine. It is capable of generating the necessary software components to implement a persistence layer API from a description in a format based on XML named Component Persistence Markup Language created to have a small fast isam/btree library for record based access. There is no sql interface available or planned MySQL Global User Variables UDF MySQL Global User Variables UDF is a shared library that adds simple user functions to MySQL in order to keep persistent shared variables in memory. 
NNDB is a C++ library that provides in-memory data storage and retrieval using syntax that is intended to feel natural for C++/STL
[Samba] ADS Domain Member Workgroup vs Realm lists at kiuni.de Tue Feb 24 00:25:14 MST 2015 I would say that workgroup = ZARTMAN should be right. Workgroups normally don't have dots in their name. Then you should also try: idmap config ZARTMAN:backend = ad idmap config ZARTMAN:schema_mode = rfc2307 idmap config ZARTMAN:range = 10000-99999 One more thing is that it's not recommended to have a .local domain realm. Have a look at this above glibc... Am 24. Februar 2015 07:28:11 MEZ, schrieb Greg Zartman <gzartman at koozali.org>: >I'm working to set up Samba as a domain member to a Windows Server >directory, and I keep hitting road blocks. There are some real >hurdles in the wiki. >In a nutshell, my problem is this: I set up a Windows 2012 Essentials >domain and I ended up with zartman.local for my "domain" in Windows. >I've got a dns zone in windows server that is domain.local and my >domain for ADS membership is zartman.local. >On the Samba side, I'm not sure what needs to be specified for "realm" >"workgroup". If I set workgroup = zartman.local and >samba bellyaches and says I can't have workgroup=zartman.local. If I >workgroup = zartman, then the Samba netbios domain is "zartman" and not >The membership details look fine for net ads info: >[root at cos6 ~]# net ads info >LDAP server: 192.168.0.15 >LDAP server name: pdc.zartman.local >Bind Path: dc=ZARTMAN,dc=LOCAL >LDAP port: 389 >Server time: Mon, 23 Feb 2015 22:25:32 PST >KDC server: 192.168.0.15 >Server time offset: 1 >If I check the domain with net domain, I get: >[root at cos6 ~]# net domain -U admin >Enter admin's password: > Domain name Server name of Browse Master > ------------- ---------------------------- > ZARTMAN COS6 >This isn't correct. It thinks my domain member is the master browser.
>For completeness sake, here is my smb.conf: >[root at cos6 ~]# cat /etc/samba/smb.conf >workgroup = zartman >realm = ZARTMAN.LOCAL >security = ads >dedicated keytab file = /etc/krb5.keytab >kerberos method = secrets and keytab >idmap config *:backend = tdb >idmap config *:range = 2000-9999 >idmap config ZARTMAN.LOCAL:backend = ad >idmap config ZARTMAN.LOCAL:schema_mode = rfc2307 >idmap config ZARTMAN.LOCAL:range = 10000-99999 >winbind nss info = rfc2307 >winbind trusted domains only = no >winbind use default domain = yes >winbind enum users = yes >winbind enum groups = yes >winbind refresh tickets = Yes >Greg J. Zartman >Koozali SME Server >SME Server user, contributor, and community member since 2000 >To unsubscribe from this list go to the following URL and read the More information about the samba
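Putting the reply's advice together with the posted smb.conf, the changed lines would look something like this (a sketch, assuming the rest of the file stays as posted; the key point is that idmap domain sections are keyed by the short NetBIOS domain name, not the DNS realm):

```ini
[global]
   workgroup = ZARTMAN
   realm = ZARTMAN.LOCAL
   security = ads

   idmap config * : backend = tdb
   idmap config * : range = 2000-9999

   ; keyed by the short domain name "ZARTMAN", not "ZARTMAN.LOCAL"
   idmap config ZARTMAN : backend = ad
   idmap config ZARTMAN : schema_mode = rfc2307
   idmap config ZARTMAN : range = 10000-99999
```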
Insignary Pledges Patent Non-Aggression Towards Open Source Today Insignary announces that it has joined the Open Invention Network community of patent non-aggression towards the "Linux System", a collection of over 2,000 core Open Source packages used to power the Internet, enterprise computing, IoT, appliances, automotive and many mobile devices. This means that Insignary joins over 2,000 other organisations who recognize that Open Source is used as the platform technology that we build our infrastructure and our personal solutions on in the modern world. As an innovative company with its own substantial investment in new technologies, we appreciate the value of the work done by the community over the past thirty years. "Open Source has been part of my work in computing since I was a student, and I have contributed to many projects and to addressing larger issues of governance," says Armijn Hemel, CTO of Insignary. "Joining the OIN community is an obvious way for our company to clearly position itself as a good citizen and a great partner in the broader world of Open Source and Open Data related activities." "Insignary, like so many companies before it, is making a clear statement about its intent to work with and support the Open Source community and the terrific innovation that Open Source provides," says Keith Bergelt, CEO of Open Invention Network. "We look forward to seeing further companies in the compliance, security and governance space joining our community in the future." Note to Editors - Armijn Hemel - EVP & CTO, Insignary Armijn Hemel, MSc, has been using open source software since 1994 and has been active in open source related activities for a long time. Mr Hemel was a board member at NLUUG and served on the core team of gpl-violations.org for many years and helped solve hundreds of GPL incompliance cases.
Mr Hemel is the primary author of the groundbreaking Binary Analysis Tool and was the first to shine a light on massive defects in implementations of the UPnP protocol that left millions of routers vulnerable. Mr Hemel studied computer science at Utrecht University, The Netherlands, where he created the first prototype of the NixOS Linux distribution based on the revolutionary Nix package manager. He has published papers at scientific conferences such as USENIX HotOS, WCRE, MSR and ASE, and frequently speaks at industry conferences about license compliance, security and code provenance. - For more information about Insignary, our membership of the OIN community or our solution for monitoring the supply chain, you can contact us at firstname.lastname@example.org anytime. You can also call us at +82-2-547-7167 during Korean office hours. Insignary was founded in 2016 as a Limited Corporation in South Korea. It is backed by Korean Venture Capital and has significant business and technical talent on its executive team. Our founding team specializes in Open Source, compliance and security. Our team has extensive experience in the software industry, venture capital and business development. We are building a framework for analysing binary files. It can be used for software license compliance and for identifying security issues in binary code. Our next generation solution is called Insignary Clarity. Open Invention Network is the largest patent non-aggression community in history and supports freedom of action in Linux as a key element of open source software. Funded by Google, IBM, NEC, Philips, Red Hat, Sony, SUSE, and Toyota, OIN has more than 2,000 OIN community members and owns more than 1,100 global patents and applications. The OIN patent license and member cross-licenses are available royalty free to any party that joins the OIN community. For more information, visit http://www.openinventionnetwork.com.
john_wayne reacted to Mark Kaine in Risks of a google sites subdomain hosted "unblocked games" site? @john_wayne I see, thanks for the clarification... I just wanted some more info as I also find it pretty strange, and probably wouldn't trust this site personally. Although it might be harmless, who knows - though it certainly seems rather sketchy to me... Maybe you could also post in the network section, I mean this sub forum isn't completely the wrong one, but it's not frequented as much, plus I think it's more of a website / internet issue than "pc games"... Just a thought but yes I'd be definitely wary too before knowing more about how this website / domain operates. john_wayne got a reaction from Mark Kaine in Risks of a google sites subdomain hosted "unblocked games" site? -Allowing the site to be visited / made use of. -A bunch of generic games I have never heard of. -No games get downloaded, they play in-browser. -the games are named: slope, 1v1.lol, basketball.io, run 3, no AAA titles here.. -That's part of the reason I asked about this here, there is no discussion I could find online about the legitimacy and safety of the multitude of these subfoldered sites. The intent seems to be a "resource" for children to play ported flash-style games in-browser without being blocked by the school's firewall - because the domain used is a subdomain of google. My mind (because I had the additional context of what was on the screen) instead went directly to trojan-horse-esque weaponization of pages being used to infiltrate [and thereafter exfiltrate] school networks, which are in-turn connected to government networks. One such scenario could play out as a specifically crafted cookie hijacking a school computer's browser (internet explorer) because the budget gets spent on network level security, yet the computers themselves go un-updated - which is the case especially in smaller school districts.
I am expecting someone in-the-know to either say "calm down boomer" or say "this is a risk that has been under the radar for too long". Reddit has mentions of these types of subfoldered unblocked pages of games for years, but only in the context of: "Here's a new URL to play games at". In moving to a zero-trust online culture, it strikes me as odd that there has been no conversation in either direction about these sites. john_wayne got a reaction from Cyracus in should i buy this to work from home? (probably not) found this ultimate peripheral on craigslist, should i buy it to work from home? that keyword cloud though.. https://jackson.craigslist.org/eld/d/clinton-live-controllable-ai-hybrid/7194703838.html would love to see what actually got delivered. admins, i could not find where to put interesting finds, so let me know if/where i need to move this to. john_wayne reacted to xAcid9 in HD 7690m 'overdrive' removed from settings after 1909 update Mine 5 but no Overdrive. john_wayne got a reaction from Ben17 in HD 7690m 'overdrive' removed from settings after 1909 update correct, MSI AB extend overclocking is not working. thanks for your help. nothing remarkable from the registry compare of the HKCU\Software\ATI\ACE\Settings key and children. making an attempt at the whole registry now. john_wayne got a reaction from Ben17 in need a vga to hdmi adapter that will output bios and boot screens I need a vga to hdmi adapter that will actually output the bios and boot screens to an hdmi monitor. this will be used for server maintenance and pc bios/boot troubleshooting - when the only monitor available is hdmi. servers only have vga out, and typically run headless. PCs: some have only vga out, or the bios/boot is only sent to the vga port (and the bios doesn't provide for bios and boot to be shown on the hdmi/dvi/DP port). I have bought so many adapters that do not work for this specific use case.
john_wayne got a reaction from GoodBytes in Photos App doesn't properly display transparent images in Windows 10 I have enabled the preview version of Photos App and will update here once it has kicked in. FWIW, i had to fullscreen the current version of Photos for the 'settings' option to be available in the '...' menu. now that preview is enabled, 'settings' is available at any size via scroll of the '...' menu. I ran microsoft store update and windows update and nothing new was available, so apparently preview versions deliver across some other subsystem. So I'll wait as you said. Regardless of whether it works, thanks for this incredible info i didn't realize existed. john_wayne reacted to jpenguin in have a usb 3 hub you can recommend in use more than 4 months? john_wayne reacted to malon in have a usb 3 hub you can recommend in use more than 4 months? i use a tp-link uh700 myself for over a year, only issue i had with it initially was that the usb cable is only 3 feet, easy to solve with a good usb 3 cable extender john_wayne reacted to FezBoy in kid friendly games on steam? 1. How old are they? 2. Do you want local multiplayer or things your kid can play "with you," like play a single player game at the same time? Some local multiplayer suggestions: Lego Batman / Star Wars / all lego games are classics (these can all have voice chat, but excluding fortnite the multiplayer with VC is all just w/ steam friends, no pub servers w/ VC) john_wayne got a reaction from Fasauceome in Mighty Mouse sleeper pc (for my 8yo) It's not a knock your socks off rig, but a max out of what we started with, an education project, and my kid hasn't taken an interest in gaming on steam yet. Hp 505B MT (Narra6 mobo) Athlon II x4 645 3.1Ghz 8gb (2x4) DDR3 1333Mhz (HP bios shows 0, but windows and memtest86+ find it all) Asus GTX 750Ti 2gb GDDR5 [Aliexpress..
VGA, Hdmi, DVI] (had to cut down shroud) [mV +31, limit 100%, Core Clock +201, Memory Clock +200] Kingston V300 120gb ssd [boot] Toshiba 2.5" 1TB 5400rpm 128MB 7mm sata III [storage] WD 3.5" 1TB 5400rpm 64MB sata III [local net share] TP-link Gigabit PCIe Network card StarTech 5.1 PCI Surround Sound Mailiya PCIe USB 3.0 2+2 card (upgraded front usb to 3.0) HP 15-in-1 multi-card reader (custom front mount) Evga 450 BT power supply ZEXMTE USB Bluetooth 4.0 CSR MPOW 059 Bluetooth headset Logitech HD C270 HP DH16AAL lightscribe dvd-rw Windows 10 Pro Toshiba 46" Regza TV Looks bone-stock from the front/top/sides (other than the usb3.0 blue ports, and the little area of black paint i did to cover where i accidentally scarred the gloss front while cutting-in the card reader). The fans rarely ramp up to audible levels, the machine's day job is ingest station, local network share, ftp server for printer's scanners, and netflix. Proof you don't have to break the bank for HTPC, data handling, and casual gaming (non-ultra settings). Plays MarioKart Wii via Dolphin (I own the Wii and the disc) with WiiMote, and Minecraft each very well. Let me know what you think. Suggestions for a next tweak? john_wayne reacted to leadeater in VirtualBox is the best VM solution, change my mind I use Hyper-V if I'm using a Windows host OS, it's not something I use a lot though as I use dedicated virtualization hosts so use ESXi for that. john_wayne reacted to Electronics Wizardy in VirtualBox is the best VM solution, change my mind hyper-v has a lot more advanced features. Think the pcie passthrough, clustering, sr-iov and other features are hyper-v only. john_wayne reacted to Theguywhobea in VirtualBox is the best VM solution, change my mind VMWorkstation IMO is the best, although it's a paid solution. It certainly has feature parity with VirtualBox, but it also has much better performance. You can get VMware Player for free, but as far as I know it doesn't have snapshot support.
The Document Object Model (DOM) is an API for XML documents. It defines the logical structure of documents and the way a document is accessed and manipulated. This page shows how to create an SVG document using the DOM API. The DOM API defines an interface called DOMImplementation, which represents the bootstrap of any DOM implementation. The role of this class is to bootstrap a particular implementation of the DOM by providing a method to create a Document. The concrete Document then represents an XML document and also acts as a factory for the various DOM objects such as Element, Attr and Text. How to get an instance of the DOMImplementation interface depends on the DOM implementation you are using. In Batik, the DOM implementation is located in the package org.apache.batik.dom.svg and the class is named SVGDOMImplementation. The following example shows how to get a concrete DOMImplementation object. Once you have an instance of a DOMImplementation, you are no longer relying on Batik-specific code and are ready to use the DOM API. Creating a Document Using the DOMImplementation, you are now able to create a Document. The following example illustrates how to create an SVG document. Note that Batik's DOM implementation can be used to represent either an SVG document fragment or any other kind of XML document; by choosing the SVG namespace URI and the local name "svg" for the root element, you create an SVG document. As you have created an SVG document, you can cast it to an SVGDocument (defined in the org.w3c.dom.svg package) if needed. Building an SVG Document Finally, using the Document object, you are now able to construct SVG content. Note that the document created before supports both generic XML and SVG.
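The elided example can be reconstructed roughly as follows (a sketch, assuming the Batik jars are on the classpath; class and package names are as stated above, though newer Batik releases may relocate them):

```java
import org.apache.batik.dom.svg.SVGDOMImplementation;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;

public class CreateSvgDocument {
    public static void main(String[] args) {
        // Bootstrap Batik's SVG-aware DOM implementation.
        DOMImplementation impl = SVGDOMImplementation.getDOMImplementation();

        // Create a document whose root element is <svg> in the SVG namespace.
        String svgNS = SVGDOMImplementation.SVG_NAMESPACE_URI;
        Document doc = impl.createDocument(svgNS, "svg", null);

        System.out.println(doc.getDocumentElement().getTagName()); // svg
    }
}
```

From this point on, only the `org.w3c.dom` interfaces are needed, which is what makes the code portable across DOM implementations.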
Though the DOM implementation of Batik is an SVG DOM implementation, the SVG-specific methods that rely on the document having been rendered (particularly geometry-related methods, such as SVGLocatable.getBBox) cannot be used at this point. The document can be built using DOM Level 2 Core methods. The following example shows how to create a red rectangle located at (10, 20), with a size of (100, 50), placed in a (400, 450) SVG canvas. The example constructs a document equivalent to parsing the following SVG file: Creating a Document from an SVG File With Batik, you can also create an SVG DOM tree from a URI, an InputStream, or a Reader, using the SAXSVGDocumentFactory. The following example illustrates how to create an SVG document from a URI using the SAXSVGDocumentFactory class. As you have created an SVG document, you can cast it to an SVGDocument (defined in the org.w3c.dom.svg package) if needed. Rendering an SVG Document Batik provides several ways to use an SVG DOM tree. Two modules can be used immediately to render your SVG document. The JSVGCanvas is a Swing component that can display SVG documents. An SVG document can be specified using a URI or an SVG DOM tree (using the setSVGDocument method). For further information about the JSVGCanvas, see the Swing components module documentation. The ImageTranscoder is a transcoder that can take a URI, an InputStream or an SVG DOM tree and produce a raster image (such as JPEG, PNG or TIFF). By creating a TranscoderInput object with the SVG DOM tree, you will be able to transform your SVG content into a raster image. For further information, see the transcoder module documentation.
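The red-rectangle example described above can be sketched like this, using DOM Level 2 Core methods only (a reconstruction, assuming the Batik jars are on the classpath):

```java
import org.apache.batik.dom.svg.SVGDOMImplementation;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class BuildRectangle {
    public static void main(String[] args) {
        DOMImplementation impl = SVGDOMImplementation.getDOMImplementation();
        String svgNS = SVGDOMImplementation.SVG_NAMESPACE_URI;
        Document doc = impl.createDocument(svgNS, "svg", null);

        // Size the (400, 450) SVG canvas.
        Element svgRoot = doc.getDocumentElement();
        svgRoot.setAttributeNS(null, "width", "400");
        svgRoot.setAttributeNS(null, "height", "450");

        // Create a red rectangle at (10, 20) with size (100, 50).
        Element rectangle = doc.createElementNS(svgNS, "rect");
        rectangle.setAttributeNS(null, "x", "10");
        rectangle.setAttributeNS(null, "y", "20");
        rectangle.setAttributeNS(null, "width", "100");
        rectangle.setAttributeNS(null, "height", "50");
        rectangle.setAttributeNS(null, "style", "fill:red");
        svgRoot.appendChild(rectangle);
    }
}
```

The document built here is equivalent to parsing `<svg width="400" height="450"><rect x="10" y="20" width="100" height="50" style="fill:red"/></svg>`.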
Data science is all about making sense of different data types and exploring, problem-solving, and extracting useful information from different data sets. When it comes to data science, there is no better alternative than the Python programming language. Python data science is becoming a trend in the current IT world, and many professionals are looking to learn Python and enter the field of data science. In this post, let us understand how one can learn Python and become a good data scientist. Reasons to learn Python for data science. Python is one of the most loved programming languages in the IT industry, backed by a large and passionate community. It has a loyal following in the data science profession too. The reasons it is preferred over other programming languages are plenty. Many people judge the quality, performance, and simplicity of a programming language by its 'hello world' program, and Python carries out this function pretty well. Simplicity is one of Python's great strengths, along with other benefits: it can perform the same tasks with less code than other languages, which makes implementation quick and efficient. The biggest strength of Python is its strong and passionate community. It also has a vibrant data science community, which means you can find plenty of free resources, code snippets, tutorials, and experienced people to solve your problems and answer your questions. The libraries are another notable strength of Python. They make data analytics more accurate and faster, with dedicated packages for machine learning and data analysis. These things make data science programs efficient, accurate, and easy to use. However, one big factor compels many people to jump into the data science space with Python as a programming language: the demand. The data generated by every business is reaching an all-time high, and it is expected to grow exceptionally in the next few years.
Hence, it is the best time to invest time and money in learning data science with the right programming language. The demand keeps growing, and talented professionals can earn lucrative pay. How can you learn Python efficiently and quickly? Before looking at the best practices for learning Python for data science, it is important to know what is not necessary. You don't need a degree in computer science to start your learning path, as long as you are good at writing logical code in R or Python. Python is a comprehensive language and data science uses only a part of it, so by mastering that part you can do data science with Python. Besides, you don't need to memorize the syntax; rather, learn to reason through your code as you write it. Learn the fundamentals Knowing the basics is the first step in mastering any topic. If you know how Python works and understand its core concepts, you can easily relate the logic behind every piece of syntax, and programming becomes a smooth process. Spend your time understanding the core concepts of Python. You can find useful resources to learn the fundamentals both online and in books. Know the uses of libraries Python has a large collection of libraries for executing numerous tasks. When it comes to data science, there is a huge set of libraries to work with. Libraries are bundles of preloaded objects and functions that can be used in your program to save time rather than programming from scratch. The essential libraries are NumPy, Matplotlib, Pandas, Seaborn and Scikit-learn. Knowledge of these libraries makes your job easier and helps you understand how Python works in practice.
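As a minimal illustration of why these libraries save code, here is a hypothetical three-row data set processed with Pandas (the column names and values are invented for the example; NumPy and Pandas are assumed to be installed):

```python
import pandas as pd

# Hypothetical data set: heights (m) and weights (kg) for three people.
df = pd.DataFrame({"height": [1.62, 1.75, 1.80],
                   "weight": [55, 72, 81]})

# A derived column in one vectorized expression -- no explicit loop needed.
df["bmi"] = df["weight"] / df["height"] ** 2

print(df["bmi"].round(1).tolist())  # [21.0, 23.5, 25.0]
```

The same computation in plain Python would need a loop and manual bookkeeping; with Pandas it is a single expression, which is the "less code" advantage described above.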
Interest is calculated on savings accounts in Mifos X based on options set when creating the savings product. The administrator can set the frequency of interest calculation and posting on each savings product to configure how often interest is posted to a savings account. The start date for interest calculation and posting, and the reference for the time period, is the start of the fiscal year. Interest is posted on the last day of the time period (as per the frequency of interest posting to accounts). Active account = an account with status "active". Time period of interest calculation = the period of time over which the interest is being calculated. For example, if the time period for interest calculation is 1 day, interest for the account is calculated per day. If the time period for interest calculation is 1 month, the interest for the account is calculated at the end of every month. Frequency of interest posting = when the interest is actually posted to the account (and can be seen in the UI). For example, if the frequency of interest posting is 1 month (and the time period for interest calculation is per day), even though the interest is calculated daily in Mifos, it is not actually posted to the account until the end of the month. Fiscal year = the 12-month period used to determine interest calculation and interest posting. Currently these periods are calculated from the beginning of the fiscal year. In Mifos X, the fiscal year is hard-coded: the first day is January 1st and the last day is December 31st. Deposit = money added to an account. Withdrawal = money deducted from an account. When referring to deposits and withdrawals in calculations, the net amount for the whole day is used. For example, if you deposit $10 and withdraw $20 on the same day, it is treated as a net $10 withdrawal for that day.
For one period, interest is calculated as: A = P(1 + r/n) where P = principal amount (initial amount), r = annual rate of interest (as a decimal), n = number of times the interest is compounded per year, and A = amount of money accumulated (including interest). Periodic intervals can be set by number of days or months. The average balance is the average of all the daily balances for that interest calculation period. Example: with P = 1200, r = 0.08 and quarterly compounding (n = 4), one period yields A = 1200(1 + 0.08/4) = 1200 × 1.02 = 1224. Requirements and Calculations General Interest Calculation and Posting Requirements Configuration Options DigitsAfterDecimal and CurrencyRoundingMode are used to set the precision of values in the Mifos DB and UI. Each interest calculation period, Mifos rounds to the precision set in Configuration Settings, stores this value in the DB, and subsequently displays it in the UI when interest is posted. Extra precision is not saved; DB and UI values match. Caveats - In cases of extremely small balances or low interest rates, this could amount to 0 interest earned, depending on the precision set in DigitsAfterDecimal.
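The one-period formula and the rounding behaviour described above can be sketched as follows (illustrative only, not Mifos source code; the function name and the half-up rounding mode are assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP

def post_interest(principal, annual_rate, n_per_year, digits_after_decimal=2):
    """One-period posting with A = P(1 + r/n), rounded to the configured
    precision before being stored -- extra precision is not kept."""
    p = Decimal(str(principal))
    amount = p * (Decimal(1) + Decimal(str(annual_rate)) / n_per_year)
    quantum = Decimal(10) ** -digits_after_decimal  # e.g. 0.01 for 2 digits
    amount = amount.quantize(quantum, rounding=ROUND_HALF_UP)
    return amount - p, amount  # (interest posted, new balance)

interest, balance = post_interest(1200, 0.08, 4)
print(interest, balance)  # 24.00 1224.00
```

It also reproduces the caveat: with a tiny balance such as `post_interest(Decimal("0.01"), 0.01, 12)`, the computed interest rounds to 0.00 at two digits of precision.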
import sys
import time
from time import sleep

import visa  # PyVISA


def printHelp():
    print("Try ->logCurrent.py iLog.csv 172800 .5")
    print("\tiLog.csv = output file for current data (file will be over written)")
    print("\t172800 = number of readings to take")
    print("\t.5 = seconds between readings")


def main(lgFile, numSamples, delayBetweenSamples):
    rm = visa.ResourceManager()
    # print('Connected VISA resources:')
    # print(rm.list_resources())
    dmm = rm.open_resource('USB0::0x1AB1::0x09C4::DM3R192701216::INSTR')
    # dmm.timeout = 10000
    print("Instrument ID (IDN:) = ", dmm.query('*IDN?'))
    print("Setting current range to 300ma", dmm.write(":MEASure:CURRent:DC 3"))
    print("Reading current range", dmm.query(":MEASure:CURRent:DC:RANGe?"))

    f = open(lgFile, 'w')
    f.write("Time, DC Current, Avg I\n")

    numSamples = int(numSamples)
    delayBetweenSamples = float(delayBetweenSamples)
    print("Poll rate in seconds = ", delayBetweenSamples)
    print("Number of Samples = ", numSamples)
    print("Output csv log file = ", lgFile, "\n\n")

    runTotal = 0
    ampHourEst = 0
    print("count", " Seconds Count ", "DC Current", "Estimated Amp Hours", sep="\t|\t")
    print("----------------------------------------------------------------------------------")
    # range(1, numSamples + 1) so we take exactly numSamples readings
    # (the original range(1, numSamples) skipped the last one).
    for x in range(1, numSamples + 1):
        iStr = dmm.query(":MEASure:CURRent:DC?").replace("\n", "")
        iFlt = float(iStr)
        runTotal = runTotal + iFlt
        ampHourEst = runTotal / x  # running average of the readings so far
        now = int(time.time())
        print(numSamples - x, now, iFlt, ampHourEst, sep="\t|\t")
        f.write(str(now) + "," + str(iFlt) + "," + str(ampHourEst) + "\n")
        sleep(delayBetweenSamples)
    f.close()


if __name__ == '__main__':
    print("argv count = ", len(sys.argv))
    if len(sys.argv) == 4:
        logFile = sys.argv[1]
        samples = sys.argv[2]
        delay = sys.argv[3]
        print(logFile, samples, delay, sep=" | ")
        main(logFile, samples, delay)
    else:
        printHelp()
Microsoft Surface Hub 2S is the new all-in-one device created and conceived with one main objective: collaboration. This product is designed to bring together the best of Microsoft's productivity solutions, such as Windows 10, Microsoft Teams, Office 365, Microsoft Whiteboard and the Intelligent Cloud, in a single device. Microsoft Surface Hub 2S features and technical specifications The new member of the Surface family, Microsoft Surface Hub 2S, offers improved performance in a thinner, lighter and more versatile design than the other models in the series. The device is 40% lighter than the previous version, with a display that is 60% thinner. For these and other reasons, Microsoft Surface Hub 2S is perfect for any environment, from classic meeting rooms to modern and compact huddle rooms. The product integrates a 50″ 4K multi-touch display and offers a large canvas for better collaboration and comfortable touch-screen use. The purpose of the Microsoft Surface Hub 2S is to offer the flexibility to get together wherever you are in order to collaborate, develop a project and do your work in the best way. To that end, the product offers, in addition to a slim design, a 4K video camera, excellent sound quality and far-field microphone technology to ensure the best collaboration for all participants. So how do you move a 50″ device from one room to another easily? Simple: thanks to the Roam Mobile Stand (designed by partner Steelcase) and the APC Charge Mobile Battery, two essential accessories for transport and power. In short, it is easy to move the device from one place to another without interrupting the work or project in progress; this also applies to Skype video calls over WiFi. "The business world, increasingly affected in recent years by digital transformation processes that revolutionize its assets, needs innovative spaces and solutions to allow teams to collaborate and give their best in the design of winning ideas.
As research by Steelcase has shown, 80% of employees consider teamwork essential to doing their best work. However, with the spread of remote working and the proliferation of global teams, it is increasingly difficult to physically be in the same room. Just consider that 70% of professionals said they work remotely at least once a week, and 53% do so at least half the days of the week. With Surface Hub 2S, we intend to offer companies a cutting-edge technological tool, capable of optimizing teamwork in any context, both physical and virtual", commented Elvira Carzaniga, Business Group Lead Surface of Microsoft Italy. It does not end here, because Microsoft's commitment in the business sector continues apace: two new devices in the Surface Hub line have already been announced, dedicated to spaces that need displays capable of offering excellent interaction via touch or pen, as well as an alternative 85″ version of the Surface Hub 2S expected in 2020. Microsoft Surface Hub 2S price and release date The device will initially go on sale in the United States as of June 2019 at an MSRP of $8,999.99. Its arrival in Italy is scheduled for the summer of 2019 but, starting from May 1st, it will be possible to pre-order the device from authorized dealers Ayno and Insight.
Novel: Birth of the Demonic Sword
Chapter 1866 – Perspective

"How could you decline the easier route?" The orange chunk continued its eager rant. "Are you making your journey harder on purpose? That structure has a clear limit. We already tested it."

Sheer power generally was a good thing. It didn't matter where it came from as long as experts could use it to pursue their goals. However, everything became more complicated when it came to their cultivation journey. Noah and his companions weren't existences who simply wanted to reach the peak. They also desired to achieve that feat on their own terms, to give value to their laws.

Victory might be within reach, but that didn't matter. The strongest possible ally had appeared, but Noah didn't care. He didn't mind receiving help or gaining benefits from powerful existences. Still, accepting that army would put his very path in danger, and he couldn't allow that.

"But!" the orange chunk exclaimed. "You can achieve victory with a single decision. It's not about saving anything. It's about removing the last obstacle!"

"It won't be my victory," Noah replied in an easygoing tone. "It would help on one side and hurt on the other. I'm willing to sacrifice something about myself to reach higher realms, but not to put the entirety of my existence at risk. Heaven and Earth is only another wall on my path."

That wouldn't happen if they used foreign powers to accomplish their feats. Their existences would also suffer for having accepted such massive help against one of their greatest challenges. It would create doubts in their confidence and cripple them forever.

They all had different goals that would be easy to seize if Heaven and Earth weren't in the way. Yet those rulers were also one of the threats that pushed them to work harder. Divine Demon had proven many times how defeating powerful opponents brought benefits, and every existence at that level was aware of that feature.

"What would I even gain from that?" Noah asked as Caesar's words resounded in his mind. "I will destroy Heaven and Earth, but I won't do it by wielding a power that doesn't belong to me. My path is more important than life, death, right, and wrong."

"We don't understand," the severed piece of Heaven and Earth exclaimed, and Noah sighed while deciding to explain himself better.

The fallen piece of Heaven and Earth didn't understand their point. It had been too long since it had last thought of itself as a single existence. It only saw its version as a world, so it struggled to grasp what Noah and the others were saying. Still, the scene awakened memories inside those fake rulers and made them recall some of the thoughts they had lost after becoming a barrier that regulated the plane.

Everything depended on what they desired, but none of Noah's companions had built their existence on their enmity with Heaven and Earth. Even Noah only saw the rulers as mere enemies on his endless path. Obviously, Noah and the others wouldn't leave Heaven and Earth alive just to have a powerful enemy to defeat and give more value to their existence. Still, they didn't deny that the feat would help them complete their laws.

The orange chunk didn't know what to say, but its defensive mechanisms deactivated on their own before opening. It seemed that the old Heaven and Earth wanted to let the group pass, and the experts didn't hesitate to enter.

"I bet you have amassed decent food throughout this period," Alexander said. "Can I see your stashes?"

"Are there training areas in this city?" Divine Demon asked. "I feel the need to stretch."

"I'd check your knowledge if you don't mind," Emperor Elbas added once Noah was done.

All the experts wanted to use the immense area or the severed piece of Heaven and Earth for their own benefit. Those whose paths shared similar goals naturally banded together and waited for a response from the orange chunk.

The other experts couldn't help but show different emotions at that scene. The orange chunk had clearly offered the army to Noah, but the question was whether that chance would open for them too. They all knew that they could seize the army if they played their cards correctly. Yet hesitation, doubts, and worries inevitably spread through their minds at those thoughts.
Capturing Spatially Varying Anisotropic Reflectance Parameters using Fourier Analysis
Inria ; Université de Grenoble/LJK ; CNRS/LJK
Imari Sato † National Institute of Informatics
Nicolas Holzschuch ‡ Inria ; Université de Grenoble/LJK ; CNRS/LJK
Figure 1 (panels: 1. Capture, 2. Analysis, 3. Rendering): Our acquisition pipeline: first, we place a material sample on our acquisition platform, and acquire photographs with varying incoming light direction. In a second step, we extract anisotropic direction, shading normal, albedo and reflectance parameters from these photographs and store them in texture maps. We later use these texture maps to render new views of the material.
Reflectance parameters condition the appearance of objects in photorealistic rendering. Practical acquisition of reflectance parameters is still a difficult problem, even more so for spatially varying or anisotropic materials, which increase the number of samples required. In this paper, we present an algorithm for acquisition of spatially varying anisotropic materials, sampling only a small number of directions. Our algorithm uses Fourier analysis to extract the material parameters from a sub-sampled signal. We are able to extract diffuse and specular reflectance, direction of anisotropy, surface normal and reflectance parameters from as little as 20 sample directions. Our system makes no assumption about the stationarity or regularity of the materials, and can recover anisotropic effects at the pixel level.
Index Terms: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture; I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture—Reflectance; Material reflectance properties encode how an object interacts with incoming light; they are responsible for its aspect in virtual pictures. Having a good representation of an object's material properties is essential for photorealistic rendering. Acquiring material properties from real objects is one of the best ways to ensure photorealism. It requires sampling and storing material reactions as a function of incoming light and viewing direction, a time-consuming process. Accurately sampling and storing a single material can take several hours with current acquisition devices. Anisotropic materials show interesting reflection patterns, varying with the orientation of the material. Because the amount of light reflected depends on both the incoming light and observer orientation, the acquisition process is even more time-consuming for anisotropic materials. Similarly for spatially varying materials, where material properties change with the position on the object. Measured materials require large memory storage, related to the number of sampled dimensions (up to 6 dimensions for spatially-varying anisotropic materials). For this reason, people often use analytic material models for rendering. Analytic BRDFs can also be authored and importance sampled more efficiently. Fitting parameters for these analytic materials to measured data is difficult; it is still an ongoing research problem. In this paper, we present a new method for acquisition of spatially varying anisotropic materials. We extract material parameters from a small set of pictures (about 20), with different lighting conditions but the same viewing condition. We use Fourier analysis to reconstruct material properties from the sub-sampled signal.
We are able to extract diffuse and specular reflectance, direction of anisotropy, shading normal and reflectance parameters in a multi-step process. Our algorithm works with the anisotropic Cook-Torrance BRDF model, with several micro-facet distributions. The choice of the micro-facet distribution is left to the user; our algorithm automatically extracts the corresponding parameters. We make no assumptions about stationarity or regularity of the material, and recover parameters for each pixel independently. The spatial resolution of our algorithm is equal to the spatial resolution of the input pictures.

Our pipeline is illustrated in Figure 1: we place a material sample on a rotating gantry and acquire a set of photographs for varying incoming light directions. From these photographs, we extract a set of maps depicting reflectance parameters such as albedo, shading normal or anisotropic direction. These parameters are then used to generate new pictures of the material.

Graphics Interface Conference 2016, 1-3 June, Victoria, British Columbia, Canada. Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.

The remainder of this paper is organized as follows: in the next section, we briefly review previous work on material acquisition, focusing on acquisition of spatially varying or anisotropic materials. In section 3, we present our acquisition apparatus. In section 4, we describe our method for extracting reflectance parameters from the acquisition data. In section 5 we present acquisition results and evaluate the accuracy of our method. We discuss the limitations of our algorithm in section 6 and conclude in section 7.

2 RELATED WORK

Acquiring material properties is time consuming, as it requires sampling over all incoming and outgoing directions.
For spatially uniform materials, researchers speed up the acquisition process by exploiting this uniformity: a single photograph samples several directions, assuming the underlying geometry is known. Matusik et al. used a sphere for isotropic materials. Ngan et al. extended this approach to anisotropic materials using strips of materials on a cylinder. In these two approaches, the camera is placed far from the material samples, so parallax effects can be neglected. Filip et al. reversed the approach for anisotropic materials: they used a flat material sample, placed the camera close, and exploited parallax effects for dense direction sampling.

These approaches do not work for spatially varying materials. A common technique is to precompute material responses to illumination, then identify materials by their response to varying incoming light. Gardner et al. used a moving linear light source, combined with precomputed material responses, to identify spatially varying materials and normal maps. Wang et al. identify similar materials in the different pictures acquired and combine their information for a full BRDF measurement. Dong et al. extend the approach in a two-pass method: they first acquire the full BRDF data at a few sample points, then use this data to evaluate the BRDF over the entire material using high-resolution spatial pictures. Compared to these methods, we do not make any assumptions about the homogeneity of the material, and recover material responses independently at each pixel. Thus, we can acquire reflectance properties even for materials with high-frequency spatial variations (see Figure 1). Ren et al. begin by acquiring full BRDF information for some base materials. Samples of these materials are placed next to the material to measure, and captured together. By comparing the appearance of the known and unknown materials, they get a fast approximation of spatially varying reflectance. Their method is limited to isotropic reflectance.
Another promising area of research is to record material responses to varying incoming light. Holroyd et al. used spatially modulated sinusoidal light sources for simultaneous acquisition of the object geometry and isotropic material. Tunwattanapong et al. used a rotating arm with LEDs to illuminate the object with spherical harmonics basis functions, allowing simultaneous recovery of geometry and reflectance properties. Aittala et al. combined two different pictures of the same material (one with flash, the other without), then exploited the material's local similarity to recover the full BRDF and normal map at each pixel. Their algorithm is limited to isotropic materials.

Empirical reflectance models represent the response of a material in a compact way. The most common models for anisotropic materials are the anisotropic Ward BRDF and the anisotropic Cook-Torrance BRDF. The latter has a functional parameter, the distribution of micro-facet normals. Early implementations used a Gaussian distribution; in that case the Ward and Cook-Torrance models are very close. Recent implementations use different probability distributions. Brady et al. used genetic programming to identify the best micro-facet distribution; they found that many measured materials are better approximated with a distribution in e^(-x) instead of a Gaussian.

Figure 2: Our acquisition platform: the sample and camera are static and aligned with each other, the light arm rotates around them. Arm position and picture acquisition are controlled by a computer.

3 ACQUISITION APPARATUS

We have set up our own acquisition apparatus (see Figure 2): a flat material sample is placed on a console. The camera is vertically aligned with the sample normal, at a large distance. A rotating arm carrying lights provides varying lighting conditions. The axis of rotation for the arm is equal to the view axis for the camera. A computer controls the arm position, which light sources are lit and the camera shooting, so t
- Dec 24, 2018

My suspicion is that there's a degree of internal factionalism or rivalry going on, and the leaders of the teams likely have a mentality of "my work is best, even if I could work with this other team, why would I when the way I do things is better?"

Even with separate teams it's staggering, like they refuse to even look at the other games. Consider Stellaris, for example - a warscore system that is needed to do the exact same shit as in EUIV or CK2... except perpetually broken. How? Why? It's not like they'd need to port stuff from those games directly, just look at how they do it! Then they come up with stupid shit like the whole "attrition" mechanic instead of the standard ticking warscore, and you get ridiculous outcomes.

It's because they have separate teams working on each game, rather than the whole studio working on one game at a time. There's probably some amount of crossover, but for the most part it's just one group of people working on HoI, and a different group of people working on CK, etc. This is partly why their engine was so fragmented and non-standardized prior to the Jomini subsystem (and to a degree, still is), and it's also why many of the games have wide variations in performance. This also likely means that most of the game design is separate for each game, and it's probably why every game has one thing it really focuses on and develops that the others all neglect (characters for CK, pops for Vicky, warfare for HoI... not sure what EU's focus is, diplomacy maybe? it feels like the most "generalist" game). Like in theory if you come up with a system for characters and internal sub-national governments in CK2 then you should be able to have that SAME system in Victoria 2 or 3.

The reason they don't isn't that it would be too complicated, it's that it's a different team working on the system and they're on different versions of the engine and stuff can't be ported without a lot of effort even if they did want to collaborate to that extent, which they likely don't.

Paradox is pretty talented at making everything shittier with each iteration. It's like they reinvent the wheel with each game, but somehow end up with worse and worse wheels. It's not just the dogshit UI or the map that seems to get uglier and uglier with each game, but also things like diplomacy – how the fuck do they keep remaking that same fucking system over and over and over again with every game, every time running on the exact same concepts, yet always come out with terrible dogshit that they need to iterate over, just to end with something that feels like a discount version of EUIV's? Is this some sort of an ego issue? "No way I'm copying stuff from past games, I will make everything myself, and better!"

This is mostly a talk about how they implemented multi-threading, but there's some bits here and there that reveal the fragmented nature of the studio.
GridBagLayout Not Keeping Order

I am trying to make a simple two column view in a particular order. It should look like this: two static headers in the top row, and 5 panels per column underneath each. The result I am getting, however, is a blank table on the left and all the details lined up horizontally on the right. The problem is the details are loaded live from the internet and refresh every 8 seconds, so I would need to update each individual cell as the data refreshes (which is why I'm not using GridLayout). The code loops infinitely (constantly bringing up up-to-date data till exit). I can't initialize all the JPanels beforehand as it takes a minimum of 6 seconds to get each data point from online. Could you help me sort out the order? Thanks

import NeuralNet.NeuralNet;
import javax.swing.*;
import java.awt.*;

public class MWindow {

    public MWindow(String[] c) {
        JFrame mainFr = new JFrame("Current Predictions");
        mainFr.setSize(800, 800);
        mainFr.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        mainFr.setVisible(true);

        JPanel p = new JPanel(new GridBagLayout());
        GridBagConstraints cns = new GridBagConstraints();
        cns.fill = GridBagConstraints.HORIZONTAL;
        cns.insets = new Insets(5, 5, 15, 15);
        mainFr.add(p);

        JTextArea man = new JTextArea("MANUFACTURING 5");
        cns.gridx = 0;
        cns.gridy = 0;
        cns.weightx = 1.0;
        cns.weighty = 1.0;
        cns.fill = GridBagConstraints.BOTH;
        p.add(man, cns);

        JTextArea inter = new JTextArea("INTERNET 5");
        cns.gridx = 1;
        cns.gridy = 0;
        cns.weightx = 1.0;
        cns.weighty = 1.0;
        cns.fill = GridBagConstraints.BOTH;
        p.add(inter, cns);

        JPanel aapl = new JPanel();
        JPanel msft = new JPanel();
        JPanel intc = new JPanel();
        JPanel ibm = new JPanel();
        JPanel tsla = new JPanel();
        JPanel fb = new JPanel();
        JPanel goog = new JPanel();
        JPanel yhoo = new JPanel();
        JPanel twtr = new JPanel();
        JPanel amzn = new JPanel();

        p.setBackground(Color.white);
        mainFr.setBackground(Color.white);

        while (true) {
            for (String cmp : c) {
                JPanel stkPanel = new JPanel();
                stkPanel.setBackground(Color.white);
                if (!(cmp.equals("INTER5") || cmp.equals("MANU5") || cmp.equals("ALL"))) {
                    Asset a = new Asset(cmp);
                    NeuralNet n = Functions.loadNN(cmp);
                    NeuralNet nA = Functions.loadNN("ALL");
                    NeuralNet n5;
                    if (cmp.equals("MSFT") || cmp.equals("AAPL") || cmp.equals("INTC")
                            || cmp.equals("IBM") || cmp.equals("TSLA")) {
                        n5 = Functions.loadNN("MANU5");
                    } else if (cmp.equals("TWTR") || cmp.equals("YHOO") || cmp.equals("GOOG")
                            || cmp.equals("FB") || cmp.equals("AMZN")) {
                        n5 = Functions.loadNN("INTER5");
                    } else {
                        System.out.println("ERROR");
                        n5 = n;
                    }

                    double pred = n.PredictRows(Functions.formatData(a));
                    double pred5 = n5.PredictRows(Functions.formatData(a));
                    double predA = nA.PredictRows(Functions.formatData(a));

                    JTextArea stkPred = new JTextArea();
                    stkPred.setText("Stock: " + cmp
                            + "\nCurrent Price: " + "$" + a.getCurrPrice().toString()
                            + "\nPrediction 1: " + Double.toString(pred)
                            + "\nPrediction 2: " + Double.toString(pred5)
                            + "\nPrediction 3: " + Double.toString(predA));
                    stkPred.setFont(new Font("Helvetica", Font.BOLD, 15));

                    int pr = Functions.calcBS(pred, pred5, predA);
                    JTextArea act = new JTextArea();
                    if (pr == -1) {
                        act.setText("SELL");
                        act.setForeground(Color.red);
                    } else if (pr == 1) {
                        act.setText("BUY");
                        act.setForeground(Color.green);
                    } else {
                        act.setText("HOLD");
                        act.setForeground(Color.orange);
                    }
                    act.setFont(new Font("Helvetica", Font.BOLD, 15));

                    stkPanel.add(stkPred);
                    stkPanel.add(act);

                    switch (cmp) {
                        case "MSFT":
                            msft.removeAll();
                            msft.add(stkPanel);
                            msft.revalidate();
                            msft.repaint();
                            cns.gridx = 0;
                            cns.gridy = 2;
                            cns.weightx = 1.0;
                            cns.weighty = 1.0;
                            cns.fill = GridBagConstraints.BOTH;
                            p.add(msft, cns);
                        case "AAPL":
                            aapl.removeAll();
                            aapl.add(stkPanel);
                            aapl.revalidate();
                            aapl.repaint();
                            cns.gridx = 0;
                            cns.gridy = 1;
                            cns.weightx = 1.0;
                            cns.weighty = 1.0;
                            cns.fill = GridBagConstraints.BOTH;
                            p.add(aapl, cns);
                        case "INTC":
                            intc.removeAll();
                            intc.add(stkPanel);
                            intc.revalidate();
                            intc.repaint();
                            cns.gridx = 0;
                            cns.gridy = 3;
                            p.add(intc, cns);
                        case "IBM":
                            ibm.removeAll();
                            ibm.add(stkPanel);
                            ibm.revalidate();
                            ibm.repaint();
                            cns.gridx = 0;
                            cns.gridy = 4;
                            p.add(ibm, cns);
                        case "TSLA":
                            tsla.removeAll();
                            tsla.add(stkPanel);
                            tsla.revalidate();
                            tsla.repaint();
                            cns.gridx = 0;
                            cns.gridy = 5;
                            p.add(tsla, cns);
                        case "TWTR":
                            twtr.removeAll();
                            twtr.add(stkPanel);
                            twtr.revalidate();
                            twtr.repaint();
                            cns.gridx = 1;
                            cns.gridy = 4;
                            p.add(twtr, cns);
                        case "FB":
                            fb.removeAll();
                            fb.add(stkPanel);
                            fb.revalidate();
                            fb.repaint();
                            cns.gridx = 1;
                            cns.gridy = 1;
                            p.add(fb, cns);
                        case "AMZN":
                            amzn.removeAll();
                            amzn.add(stkPanel);
                            amzn.revalidate();
                            amzn.repaint();
                            cns.gridx = 1;
                            cns.gridy = 5;
                            p.add(amzn, cns);
                        case "GOOG":
                            goog.removeAll();
                            goog.add(stkPanel);
                            goog.revalidate();
                            goog.repaint();
                            cns.gridx = 1;
                            cns.gridy = 2;
                            p.add(goog, cns);
                        case "YHOO":
                            yhoo.removeAll();
                            yhoo.add(stkPanel);
                            yhoo.revalidate();
                            yhoo.repaint();
                            cns.gridx = 1;
                            cns.gridy = 3;
                            p.add(yhoo, cns);
                    }
                    p.add(stkPanel);
                    p.revalidate();
                }
            }
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

First, I strongly suggest you have a look at Concurrency in Swing, as you are violating the single-threaded nature of Swing. I'd suggest you then have a look at Worker Threads and SwingWorker for a possible solution. You have a lot going on in your code - lots of things getting added, removed and created - which is just plain confusing. The information you're generally updating isn't that dynamic: the data is, but the way it's presented isn't. So, what I would suggest doing instead is: create a single custom class which encapsulates the information you want to display - this becomes the model - then create a single custom class which can display the information in the format you want. I would then create a number of these components and place them within the container in whatever fashion you want, maybe using a Map to bind them (so you can look up the component for a given data source when it changes).
Then, I would simply pass the data/model to the appropriate component when it changes, and it could simply update the display. Based on what I can tell from your code, a JTable would probably go a long way, as would a series of JLabels/JTextFields, making the UI much easier to update. It would also make it easier to add/remove data sources.
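As a rough illustration of that model/view split: the class names below and the exact buy/sell rule are hypothetical (the question's Functions.calcBS is not shown), but the shape is what the answer describes - an immutable model object plus a view component that is built once and only has its text updated.

```java
import javax.swing.*;

// Immutable snapshot of one stock's data: the "model".
final class StockQuote {
    final String symbol;
    final double price, pred1, pred2, pred3;

    StockQuote(String symbol, double price, double pred1, double pred2, double pred3) {
        this.symbol = symbol;
        this.price = price;
        this.pred1 = pred1;
        this.pred2 = pred2;
        this.pred3 = pred3;
    }

    // Hypothetical stand-in for Functions.calcBS: all three predictions
    // above the price means buy (1), all below means sell (-1), else hold (0).
    int signal() {
        int votes = 0;
        votes += pred1 > price ? 1 : -1;
        votes += pred2 > price ? 1 : -1;
        votes += pred3 > price ? 1 : -1;
        return votes >= 2 ? 1 : votes <= -2 ? -1 : 0;
    }

    String action() {
        int s = signal();
        return s == 1 ? "BUY" : s == -1 ? "SELL" : "HOLD";
    }
}

// The "view": created once, kept in a Map<String, StockPanel>, and only
// updated (on the Event Dispatch Thread) when a fresh StockQuote arrives.
final class StockPanel extends JPanel {
    private final JTextArea text = new JTextArea();

    StockPanel() {
        text.setEditable(false);
        add(text);
    }

    void update(StockQuote q) { // call via SwingUtilities.invokeLater
        text.setText("Stock: " + q.symbol
                + "\nCurrent Price: $" + q.price
                + "\n" + q.action());
    }
}
```

A background SwingWorker would then fetch the data and publish StockQuote objects, so the grid is laid out exactly once and components are never removed and re-added inside the loop.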
As you've researched how best to block ads (and also the trackers that frequently come with them), you've probably come across mentions of uBlock Origin and Pi-hole. uBlock Origin is the gold standard, but Pi-hole has numerous unique benefits as well. So, which blocking solution should you use? What if we told you... you should probably look at using both if possible? Don't worry. Let us explain. NOTE: Not to be confused with the commercial option, "uBlock"! uBlock Origin is a free and open source adblocker - but more specifically, a wide-spectrum tracker blocker - for browsers. It's available as a plugin for most browsers; you can install uBlock Origin as an add-on for Gecko-based browsers like Firefox or as an extension for Chromium-based browsers. uBlock Origin is very often highly recommended in the privacy community for tracker blocking at the browser level. In fact, you could probably run only uBlock Origin in your browser and receive the most comprehensive tracker blocking available. You'll find that privacy browsers, such as Librewolf, ship with uBlock Origin pre-installed as the default ad and tracker blocking solution. You can sometimes find some version of uBlock Origin being used in browsers that feature their "own" ad and tracker blocking capabilities. Pi-hole is a free and open source project that allows you to turn your (Linux) device into a local filtering DNS server for your entire network. Simply put, it allows you to block ads and malicious domains on your network. If you're curious about how to install Pi-hole, then feel free to check out the avoidthehack guide for installing and configuring Pi-hole. While you're at it (or if you already have Pi-hole installed), be sure to check out our curation of the best Pi-hole blocklists! In the general privacy community, you'll find Pi-hole mentioned frequently by those looking for a network adblocking solution.
It's frequently mentioned alongside other compatible solutions such as Unbound, a self-hostable recursive DNS resolver, and other trusted DNS service providers such as Quad9 or NextDNS. The biggest difference between uBlock Origin and Pi-hole is the scope of each solution's blocking abilities. - Pi-hole is a network-wide ad and tracker blocker. When properly set up, Pi-hole provides a "service" to the entirety of the network, blocking ads and trackers for any device connected to the network that Pi-hole sits on. - On the other hand, uBlock Origin is limited to the device on which it's installed. Specifically, it's a browser plugin, so it's actually limited to whatever web browser it's installed on. Think of it like this: one installation of Pi-hole can provide blocking protection for more than one device, whereas one installation of uBlock Origin can only provide blocking for the browser of the device it's installed on. This does not make uBlock Origin inferior in any way; after all, it's a browser plugin with a limited overall scope. In addition to their respective ad and tracker blocking abilities, both uBlock Origin and Pi-hole offer additional functionality beyond their primary functions. Given the broad difference in scope of their blocking capabilities, these additional functions differ enough to note here. Pi-hole can act as a DHCP server for your network, assigning the leases for the internal IP addresses on your network. Use of blocklists Both uBlock Origin and Pi-hole perform the bulk of their ad and tracker blocking by using blocklists. These blocklists frequently contain known hosts (read: domains) that serve undesired content/advertisements/tracking methods. Hosts for ads and trackers are numerous, for lack of a better term. Because of the seemingly endless supply of ad and tracker-related servers, some blocklists may even be "themed" or "niche."
For example, one blocklist might be dedicated to blocking hosts related to SmartTV ad/tracker domains, whereas another might be dedicated to blocking content that is not safe for work (NSFW). Pi-hole and uBlock Origin are easy to use for more average users, but boast a ton of customization options that many advanced users find beneficial. These customization options allow anyone to truly tailor the software to their specific wants and needs; the versatility fits into a wide range of threat models common to the end-user. Much of the customization within Pi-hole and uBlock Origin lies in the additional functionalities that are unique to both. However, a lot of customization also resides in the settings of each piece of software. For example, uBlock Origin's "advanced options" allow you to tweak the nitty-gritty of the plugin itself. On the other hand, Pi-hole allows you to apply blocklists to specific user groups or devices (assuming you allow Pi-hole to be your network's DHCP server). Pi-hole and uBlock Origin are both easy to use. Pi-hole has a more complicated installation process, but after everything is said and done it's relatively easy to manage through the Web Interface. With Pi-hole's intuitive web interface, you can easily manage blocklists. This really helps users who either 1) are not familiar with the CLI or 2) prefer not to use the CLI for everything. uBlock Origin is super easy to install - it literally just requires you to install the extension (Chromium-based browsers) or add-on (Gecko/Firefox), which can be found in the Chrome Web Store or on the Mozilla Add-ons website. Alternatively, you can opt for installing the plugin manually. Though manual installation is more of a process on Chromium, avoidthehack does have a nice write-up on how to install Chromium extensions manually. The short answer is: both. Yes, you should use both Pi-hole and uBlock Origin.
Ideally, it isn't a question of "either, or," because using Pi-hole and uBlock Origin in tandem provides you with comprehensive tracker and ad blocking: uBlock Origin is a pretty comprehensive wide-spectrum blocker all on its own; its effectiveness can actually help your Pi-hole installation by not working it so hard. This is because when uBlock Origin blocks something, the request never gets "seen" by Pi-hole. However, as we noted earlier, uBlock Origin is a browser plugin, meaning it can't provide adblocking outside of the browser itself. This may not seem like a large issue, but there's a high chance you also run various apps outside the browser; these apps can perform their own DNS queries, and if they serve ads or are "phoning home," then uBlock Origin is effectively helpless to stop this activity. Ideally, this is where Pi-hole would step in. Pi-hole essentially sits on your network as a filtering DNS server; it can catch these requests and stop them. Furthermore, if something "slips through" Pi-hole, then - provided you're using a trusted encrypted DNS resolver that provides domain blocking as the upstream - it could hopefully be blocked there instead. Since Pi-hole sits on the network, it can also block ad and tracker-related DNS queries from other devices; even devices that traditionally don't have web browsers. It can also block queries sent from apps on your devices that originate outside the web browser; blocking excessive operating system telemetry (are you using Windows or macOS?) is a possibility as well. Once the DNS request is blocked, there is typically no connectivity, so you're safe - and as a side effect, you're also saving bandwidth on your network! While the primary focus of this post is to examine uBlock Origin and Pi-hole and how they compare, it's important to understand that they don't exist in a vacuum. It's definitely important to consider other factors - the total picture - in addition to tracker blocking.
And admittedly, there are a lot of things to consider. Some of the basic points you can springboard from are: It's also worth mentioning that there is other software out there that can complement your usage of uBlock Origin and Pi-hole. For example, perhaps you would like to self-host your own recursive DNS resolver by running an Unbound server - or perhaps you're savvy and confident enough to securely set up your own VPN so that you can take advantage of your network while in a different physical location. Perhaps you seek to complement uBlock Origin's blocking functionality with a more "niche" blocker - such as LocalCDN, which blocks CDN connections and injects the content locally. Or maybe you're concerned about fingerprinting and are looking for more protection, so you opt to run another extension that hardens your browser's resistance to various fingerprinting techniques. When it comes to Pi-hole and uBlock Origin, it doesn't have to be either/or. They are both valuable when it comes to achieving comprehensive ad and tracker blocking protection. In fact, they should both be used, as these pieces of software greatly complement each other. You can receive even more enhanced benefits by customizing them exactly to your liking, or even by using other trusted blocking software. With that said, if there is a situation where you have to choose, in many cases you'll find that uBlock Origin is the definitive answer because of its insanely easy installation and deployment; again, you download the add-on or extension for your browser and, with near-zero setup time, you have one of the most effective browser adblockers on the market. Additionally, given the number of people who may be using their ISP's router, Pi-hole becomes a less attractive option. However, if you have the opportunity, you should definitely set it up, as it's a fantastic benefit to your entire network as opposed to a single browser/device! With all of that said, stay safe out there!
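One practical way to see this layering in action is to check what a domain resolves to from a device on your network: in its default blocking mode, Pi-hole answers blocked A queries with the "null" address 0.0.0.0 (and AAAA queries with ::). The helper below is our own sketch (the class and method names are not part of Pi-hole); it classifies a resolved address and resolves a host through whatever DNS server the device is configured to use.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

final class PiholeCheck {

    // Pi-hole's default "null IP" blocking responses.
    static boolean looksBlocked(String resolvedAddress) {
        return resolvedAddress.equals("0.0.0.0")
                || resolvedAddress.equals("::")
                || resolvedAddress.equals("0:0:0:0:0:0:0:0");
    }

    // Resolves a host with the system resolver, i.e. whatever DNS server
    // this device is configured to use - ideally your Pi-hole.
    static String resolve(String host) throws UnknownHostException {
        return InetAddress.getByName(host).getHostAddress();
    }
}
```

Behind a working Pi-hole with a typical ad blocklist, `PiholeCheck.looksBlocked(PiholeCheck.resolve("doubleclick.net"))` should come back true; on an unfiltered resolver it should come back false.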
Does not support subtests (available since Go 1.7) https://blog.golang.org/subtests

Just an improvement request - I make heavy use of subtests, and here's a snippet of my output:

##teamcity[testFinished timestamp='2018-12-08T20:05:32.338' name='Test_DB']
--- PASS: Test_DB/memory (0.00s)
--- SKIP: Test_DB/memory/ensures_that_ref_starts_with_refs/ (0.00s)
    ref_test.go:43: not implemented yet
--- PASS: Test_DB/memory/various_read/write (0.00s)
--- PASS: Test_DB/memory/various_read/write/returns_true_when_writing_a_ref_for_the_first_time (0.00s)
--- PASS: Test_DB/memory/various_read/write/returns_false_if_hash_is_unchanged_in_store (0.00s)
--- PASS: Test_DB/memory/various_read/write/returns_true_if_hash_is_changed_overwritten_in_store (0.00s)
--- PASS: Test_DB/memory/various_read/write/retrieves_an_existing_object_if_already_in_store (0.00s)
--- PASS: Test_DB/memory/various_read/write/returns_unknown_ref_error_for_non_existent_objects_in_store (0.00s)
--- SKIP: Test_DB/memory/various_read/write/write_symbolic_fails_if_the_ref_name_is_not_all_caps (0.00s)
    ref_test.go:80: not implemented yet
--- PASS: Test_DB/memory/various_read/write/write_symbolic_returns_true_if_the_symbolic_ref_was_created_or_changed (0.00s)
--- PASS: Test_DB/memory/various_read/write/write_symbolic_returns_false_if_the_symbolic_ref_was_created_or_changed (0.00s)
--- PASS: Test_DB/memory/various_read/write/retrive_symbolic_returns_symbolic_ref_correctly (0.00s)
--- PASS: Test_DB/memory/various_read/write/retrive_symbolic_returns_error_on_non_existent_ref (0.00s)
--- PASS: Test_DB/memory/listing_objects (0.00s)
--- PASS: Test_DB/fs (0.03s)
--- SKIP: Test_DB/fs/ensures_that_ref_starts_with_refs/ (0.00s)
    ref_test.go:43: not implemented yet
--- PASS: Test_DB/fs/various_read/write (0.03s)
--- PASS: Test_DB/fs/various_read/write/returns_true_when_writing_a_ref_for_the_first_time (0.00s)
--- PASS: Test_DB/fs/various_read/write/returns_false_if_hash_is_unchanged_in_store (0.00s)
--- PASS: Test_DB/fs/various_read/write/returns_true_if_hash_is_changed_overwritten_in_store (0.00s)
--- PASS: Test_DB/fs/various_read/write/retrieves_an_existing_object_if_already_in_store (0.00s)
--- PASS: Test_DB/fs/various_read/write/returns_unknown_ref_error_for_non_existent_objects_in_store (0.00s)
--- SKIP: Test_DB/fs/various_read/write/write_symbolic_fails_if_the_ref_name_is_not_all_caps (0.00s)
    ref_test.go:80: not implemented yet
--- PASS: Test_DB/fs/various_read/write/write_symbolic_returns_true_if_the_symbolic_ref_was_created_or_changed (0.00s)
--- PASS: Test_DB/fs/various_read/write/write_symbolic_returns_false_if_the_symbolic_ref_was_created_or_changed (0.00s)
--- PASS: Test_DB/fs/various_read/write/retrive_symbolic_returns_symbolic_ref_correctly (0.00s)
--- PASS: Test_DB/fs/various_read/write/retrive_symbolic_returns_error_on_non_existent_ref (0.00s)
--- PASS: Test_DB/fs/listing_objects (0.00s)
PASS
ok github.com/retro-framework/go-retro/framework/ref 0.074s
testing: warning: no tests to run
PASS
ok github.com/retro-framework/go-retro/framework/repo 0.043s [no tests to run]
FAIL github.com/retro-framework/go-retro/framework/resolver [build failed]
##teamcity[testStarted timestamp='2018-12-08T20:05:32.354' name='Test_Storage']
##teamcity[testFinished timestamp='2018-12-08T20:05:32.354' name='Test_Storage']
PASS
ok github.com/retro-framework/go-retro/framework/storage 0.038s
testing: warning: no tests to run
PASS
ok github.com/retro-framework/go-retro/framework/storage/memory 0.058s [no tests to run]
make: *** [test-units] Error 2

Can you make a pull request? Thx
[Issue] Typing a Chat Message Causes a Massive Lag Spike

Describe
Typing in chat creates a massive lag spike. The tick in which the messages are processed ends up lasting from tens of seconds to multiple minutes. The time is getting progressively longer on our server.

Reproduce
Currently we have a world with a self-propelling mining machine built with Create. The machine is being chunk loaded using 4 chunky turtles. Three of those turtles are attached to the miner itself and are moved by Create, and the last turtle moves by itself. These are the only CC: Tweaked computers or turtles that exist in the world. This miner has moved roughly 55,000 blocks, and the issue seems to be getting worse the longer the world (and the miner) run.

Steps to reproduce the behavior:
1. Make a self-propelling flying machine with Create.
2. Attach a chunky turtle to the machine. I suspect any kind of turtle will do.
3. Let the machine run for a long time.
4. Try typing messages in chat and there should be a lag spike as the message is processed.

Suspected Cause
Profile when typing a chat message: https://spark.lucko.me/jImuCwBmBz
For context, I am a Java programmer, but I'm not a Forge modder. Based on my understanding of the source code, this is what I think is causing the issue. When a chat message is sent, it is handled by the Events class. Those event handlers then both call getTileEntities from the TileEntityList. In the case of my world, getTileEntities takes a very long time. As far as I can tell, the TileEntityList is supposed to be a cache of tile entities relevant to Advanced Peripherals. If my understanding is correct, then in theory this should only have 4 entries for the 4 turtles in my world. Given how long the tick times are when processing messages, I suspect that this cache is instead growing without having entries properly removed. Not sure if this is a result of them being moved by Create or just that they have moved a large number of blocks.
Suggestions
Here are some suggestions. I'm not super aware of the intricacies of Forge, but here are some options to optimize the getTileEntities call.

1. Only subscribe to the server chat event once, and only iterate through the cache of tile entities once. Based on the type of tile entity, delegate to separate methods to handle it for either the turtles or the computers. This would remove one of those two calls to getTileEntities.

2. Consider using a Set instead of a List for the underlying data structure. See TileEntityList.java#L21. This would avoid the need to do contains checks when adding items to the cache. It would also make contains checks significantly faster with large numbers of entries.

3. It looks like setTileEntity is a toggle for whether a tile entity block position should be included in the cache: if it is already in the cache, remove it; otherwise, add it. I would suggest splitting this into two methods, addTileEntity and removeTileEntity, to avoid accidentally removing a tile entity when trying to add it or vice versa, which should make it easier to ensure the cache is updated correctly. I think this would avoid the need for the setTileEntity method with the force parameter too.

There are a few potential issues with getTileEntities itself.

public List<TileEntity> getTileEntities(World world) {
    List<TileEntity> list = new ArrayList<>();
    for (BlockPos next : new ArrayList<>(getBlockPositions())) {
        if (world.isAirBlock(next) || !world.isBlockLoaded(next))
            setTileEntity(world, next); // No block here anymore.
        if (world.getTileEntity(next) == null)
            setTileEntity(world, next); // No tile entity here anymore.
        list.add(world.getTileEntity(next));
    }
    return list;
}

It starts by iterating through the different block positions, but if setTileEntity is called, then it needs to iterate through all of them again when it's performing the contains check in setTileEntity here.
This makes getTileEntities a very expensive method to run when the cache is large, since the check is O(n^2). Even if the tile entity was removed from the list by the first if statement, it could potentially be re-added by the second if statement. This is not something I'm super familiar with, but it seems reasonable that a block could be air and the tile entity at that position could be null at the same time. Since setTileEntity currently toggles whether the block position is in the cache, the net result is that a position that should have been removed is toggled twice and instead remains in the cache. Finally, even if a position was removed from the cache, its tile entity is still added to the final result List to be returned, which seems like the wrong behavior.

Versions:
- Mod Pack (All the Mods 6): 1.6.1
- Forge version: 36.1.2
- AdvancedPeripherals version: 0.4.5b

Crashlog/log (Use https://paste.gg or https://pastebin.com to upload your crashlog/log): This doesn't cause a crash. The world where this is happening is pretty large (multiple GB), but I can potentially provide a copy if needed.

> Even if the tile entity was removed from the list by the first if statement, it seems like it could potentially be re-added by the second if statement. This is not something I'm super familiar with, but it seems like it's reasonable that a block could be air and have the tile entity at that position be null at the same time

You're right, I will fix that. Could you do /advancedperipherals debug and send me the output? You can click on the message.

It's a very very very long list. It's too big for pastebin.

pastemyst could work, but I know the issue now.

It was too big for pastemyst too. This isn't even all of it, since the scrollback on the terminal where the server is being run is limited, but here it is.
Turns out GitHub gists can be fairly large: https://gist.githubusercontent.com/cwenck/61e3dd6156e692d71defd6fe9feac9e1/raw/3a61500e1db6073433648f43817fb3cdbbba82dc/gistfile1.txt

advancedperipherals-0.5.1b.zip

Could you try it with this build? Unzip it before using.

Does that mean it works better now, or that you will test it? 😅

Sorry, I meant that I will test it. It does look like entries are getting correctly removed now when a chat message is typed. The list goes down to something like this:

[BlockPos{x=48505, y=79, z=-897}, BlockPos{x=54109, y=79, z=-897}, BlockPos{x=48514, y=78, z=-898}, BlockPos{x=54118, y=78, z=-898}, BlockPos{x=48516, y=78, z=-897}, BlockPos{x=54120, y=78, z=-897}, BlockPos{x=54109, y=79, z=-899}, BlockPos{x=48505, y=79, z=-899}]

There is one thing that could still be an issue, though it's less problematic. I'm assuming the cache only gets cleared when a message is typed in chat, so it could still build up to a larger-than-ideal size given enough time without a message being sent.

I will release that tomorrow!

Fixed this issue with v0.5.2b. It now clears non-existing tile entities from the list. Thank you for the report.
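To make the toggle problem discussed in this thread concrete, here is a small illustrative sketch (plain Python, not the mod's actual Java source; all class and method names are invented) contrasting the toggle-style list cache with the suggested set-based add/remove split:

```python
# Hypothetical sketch (not AdvancedPeripherals source): contrasts the
# toggle-style list cache described in the issue with the suggested
# addTileEntity/removeTileEntity split. Names are illustrative only.

class ToggleListCache:
    """List-backed cache where set_entry() toggles membership (O(n) contains)."""

    def __init__(self):
        self.positions = []  # list => every contains check scans all entries

    def set_entry(self, pos):
        if pos in self.positions:       # O(n) scan on every call
            self.positions.remove(pos)  # already cached -> toggled out
        else:
            self.positions.append(pos)  # not cached -> toggled in


class SplitSetCache:
    """Set-backed cache with explicit add/remove, O(1) membership checks."""

    def __init__(self):
        self.positions = set()

    def add_entry(self, pos):
        self.positions.add(pos)         # idempotent: adding twice is safe

    def remove_entry(self, pos):
        self.positions.discard(pos)     # idempotent: removing twice is safe


# The toggle bug from the issue: "removing" the same stale position twice
# (once per if-statement) re-adds it instead of leaving it removed.
toggle = ToggleListCache()
toggle.set_entry((0, 64, 0))   # cached
toggle.set_entry((0, 64, 0))   # first "removal" -> gone
toggle.set_entry((0, 64, 0))   # second "removal" -> accidentally re-added!

split = SplitSetCache()
split.add_entry((0, 64, 0))
split.remove_entry((0, 64, 0))
split.remove_entry((0, 64, 0))  # still gone, no resurrection
```

With the split API, a double removal is harmless, and set membership keeps getTileEntities-style scans linear instead of quadratic.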
Coordinating Your Online Channels
Aspiration Communications Matrix

introductions + problems:

consultant to non-profits:
- most non-profits don't have an online communications plan

communications person for a non-profit:
- had a matrix, it was very helpful
- there is value when you come in cold into an advocacy campaign
- show how they are doing social media

cpenn: part of original aspiration cohort
- aspiration works with non-profits to help them build online communications
- focus on building a communications matrix
- consultant needs their own communications matrix

issue: there's a matrix in place
- there's a primary message sender that doesn't actually check the matrix
- it's just a documentation file that's useful when there's a change of head
- but is it used daily? any tools to help this out?

aspiration publishing matrix
- simple spreadsheet
- a lot of nonprofits don't think critically about what kind of content they're putting on social media
- content types: ROWS
- channels: COLUMNS
- barebones skeleton for your communications plan

content types:
- anything your organization creates
- examples: enewsletters, event announcements, action alerts, fundraising appeals
- blog posts, tweets, new staff announcements
- all the different things you are publishing
- this list can get exhaustive
- amazing: you are creating a LOT of things
- almost like writing out your job description

channels:
- places you are putting information
- examples: website, blog, facebook, email, twitter
- offline channels: texting, phone, mail, paper newsletter
- amazing how many channels people have but don't remember
- this leads to constant missed opportunities
- anecdote: we had all these youth videos that we never put on youtube or facebook!
- this was a missing content type: VIDEOS!
completed publishing matrix:
- fill in with Xs
- don't need to know software to fill it out, just know how to type
- X denotes an action taking place
- creating workflow: whenever I have an enewsletter:
  - I send it to my email list
  - also twitter + facebook
- if I'm not in tomorrow, someone else will do it for me (because we haven't done that yet)
- print it out and post it on the wall
- by writing it down, there's accountability

fundraising appeal:
- people worry about donor fatigue
- at a lot of organizations there's no process governing "When do we ask for money, when do we not?"
- a process lessens fear around fundraising appeals
- aspiration tries not to fundraise on blog, twitter, facebook (different for every organization)
- aspiration does fundraise on email though

helpful for the person doing it: one online communications person
- what about more people? add staff names to boxes
- if you want more accountability: just split the boxes (NEED TO DO) (DONE)
- hack the format to be however you want
- the matrix is a PERMANENT DOCUMENT, not a working document
- documenting the workflow
- instead of an X in the column, actually describe the workflow: "post a bitly link of the newsletter" in the twitter column

non-profits like to make things complicated!
- but this matrix is very simple

question to group: would this be useful?
- yes, we have written things out, but not in a chart format
- sample flow: fundraising calls, annual report, at least one post on facebook
- sample flow: workshops open to facebook = on facebook + blog

having the info is great, thinking it through is great, but
- things come up!
- could be policy that a fundraising appeal doesn't go out on facebook
- but somebody does it anyway
- questions of enforcement!!

Question: are there automation tools for loading message content and sending the message out over a channel on an arbitrary date?
HootSuite: collaborative automation tool
- but somebody needs to be a gatekeeper
- gatekeeper previews someone's stuff: maybe a week in advance, allowing time to review
- scheduled publishing, moderation, has analytics
- but it costs money
- free dashboard
- has twitter, linkedin, facebook tabs

Twuffer: free tool - twitter buffer - schedule tweets

Tweetdeck: twitter tool

Personal tweaks for the workflow matrix:
- great for hiring someone: what tasks have people been doing
- it's a spreadsheet, not a web app, so you can easily modify it
- anecdote: modified twitter, got buy-in from policy team to tweet breaking news while in hearings
- not my responsibility to tweet as communications person, but my responsibility to remind them
- tweets can happen organically, not scheduled
- anecdote: policy briefs, which audiences to pass them out to
- anecdote: ethnic media

knowing where your capacity is at:
- if you don't have time to sign into twitter and be there for the conversation
- you probably don't have time to tweet
- that's where the value is, if you have time to update it
- twitter's not a megaphone, it's a conversation

sample matrix for aspiration
- specific workflows for stuff that needs to get done for trainings + engagement
- channels up top: discussion list, eventbrite (for trainings), flickr, pdf flyer for partners
- channels cont.: aspiration website, facebook, twitter
- specific to our project, all of the channels
- content types: training announcement
- training reminder
- blog post
- content types: post-event recap needs to be on the website
- content types: pictures need to go to flickr
- spanish content: remind myself with asterisks
-> external communications vs internal communications

organizational matrix
- same matrix but with people's names, so if anybody has questions we know who to ask

personal matrix:
- has X's

how to use it:
1. start with a framework that helps get you started
2. modify it to make it your own
3. don't have to look at it every day; more helpful in the beginning

so many people we know have made publishing matrixes
- but not everybody uses it on a daily basis
- could get user feedback on how to make it better
- a lot of nonprofits aren't thinking about their online communications
- helpful to validate a job role: justify funding for the tech/social media person
- "how hard is it to update facebook? just do it. throw it up real quick!"
- "actually there's a workflow with a headcount associated."
- especially for volunteers + interns
- a lot of complexity that is taken for granted

how to get started?
1. as the person doing the social media: write down all the content types
2. then write all of your channels
3. then put in the Xs - for ones you don't know about, use question marks
4. bring the draft to the staff and ask "What are we missing? What should change?"
5. open the draft up for discussion instead of trying to draft in a group

anecdote: events coordinator since 13
- but never operationalized what I do
- helps for organizational transition - how to put on a successful event?
- once you know how to do something, you realize how valuable an asset that is to pass on
- instead of just getting a new hire with undocumented expectations
- make sure someone other than you has the twitter password!!!

critique? is it a good thing?
- it seems like a good place to start, but it is lacking frequency and how effective the channels are
- it's just the map
- great for job validation!
- entry point: do it for three months, then let's talk
- takes 3-4 months to make a ripple in the water

tool: social report
- plug in facebook + twitter
- maps how busy it is
- how many are listening, how many RTs
- shows you how much time you're wasting

four processes at aspiration:
1. audience assessment
2. publishing matrix
3. message calendaring
4. social media dashboarding/listening
- where are you being mentioned online?
topsy: news clipping service for twitter
- look at hashtags or twitter names over the past 30 days
- twitter stopped
other tools: mapping occupy oakland, traaker

questions for the publishing matrix? how to make one?
1. Who is the audience of this channel? (Strangers? Fans? Founders? Members?)
2. Do we want to engage that audience for this content type? (is it spam? is it valuable?)
3. If yes, how do we want to engage them? Picture? Blog post? Full content piece
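As a rough illustration of the matrix idea from these notes (not an Aspiration tool; the content types, channels, and rendering function are all invented example data): content types as rows, channels as columns, an X wherever a publishing action happens.

```python
# Illustrative only: a minimal publishing matrix as plain Python data.
# Rows = content types, columns = channels, "X" marks a publishing action.
# Example entries loosely follow the session notes (e.g. no fundraising
# appeals on social channels).

channels = ["email", "website", "blog", "facebook", "twitter"]

matrix = {
    "enewsletter":        {"email": "X", "twitter": "X", "facebook": "X"},
    "fundraising appeal": {"email": "X"},  # policy: no appeals on social
    "event announcement": {"website": "X", "facebook": "X", "twitter": "X"},
}

def render(matrix, channels):
    """Print the matrix as a simple text grid, like the wall printout."""
    header = "content type".ljust(20) + "".join(c.ljust(10) for c in channels)
    rows = [header]
    for content, marks in matrix.items():
        row = content.ljust(20)
        row += "".join(marks.get(c, "-").ljust(10) for c in channels)
        rows.append(row)
    return "\n".join(rows)

print(render(matrix, channels))
```

Keeping it as plain data like this mirrors the session's advice: the matrix stays a simple document anyone can read and hack, rather than a tool people need training on.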
Object with integer property causes TypeError when patternProperties are defined

Example:

```python
from jsonschema import validate

schema = {
    'type': 'object',
    'patternProperties': {
        '^[0-9]{3}$': {'type': 'string'}
    },
}

validate({200: 'hello'}, schema)
```

Result:

```
Traceback (most recent call last):
  File "/Users/jstewmon/Library/Application Support/IntelliJIdea2016.1/python/helpers/pydev/pydevd.py", line 1530, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Users/jstewmon/Library/Application Support/IntelliJIdea2016.1/python/helpers/pydev/pydevd.py", line 937, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Users/jstewmon/Library/Preferences/IntelliJIdea2016.1/scratches/scratch_158", line 11, in <module>
    validate({200: 'hello'}, schema)
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/validators.py", line 478, in validate
    cls(schema, *args, **kwargs).validate(instance)
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/validators.py", line 122, in validate
    for error in self.iter_errors(*args, **kwargs):
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/validators.py", line 98, in iter_errors
    for error in errors:
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/_validators.py", line 14, in patternProperties
    if re.search(pattern, k):
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/re.py", line 146, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or buffer
```

A similar, but discrete, error occurs when additionalProperties is False:

```python
from jsonschema import validate

schema = {
    'type': 'object',
    'patternProperties': {
        '^[0-9]{3}$': {'type': 'string'}
    },
    'additionalProperties': False
}

validate({200: 'hello'}, schema)
```

Result:

```
Traceback (most recent call last):
  File "/Users/jstewmon/Library/Application Support/IntelliJIdea2016.1/python/helpers/pydev/pydevd.py", line 1530, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Users/jstewmon/Library/Application Support/IntelliJIdea2016.1/python/helpers/pydev/pydevd.py", line 937, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Users/jstewmon/Library/Preferences/IntelliJIdea2016.1/scratches/scratch_158", line 11, in <module>
    validate({200: 'hello'}, schema)
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/validators.py", line 478, in validate
    cls(schema, *args, **kwargs).validate(instance)
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/validators.py", line 122, in validate
    for error in self.iter_errors(*args, **kwargs):
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/validators.py", line 98, in iter_errors
    for error in errors:
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/_validators.py", line 25, in additionalProperties
    extras = set(_utils.find_additional_properties(instance, schema))
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/site-packages/jsonschema/_utils.py", line 100, in find_additional_properties
    if patterns and re.search(patterns, property):
  File "/Users/jstewmon/.virtualenvs/pre-commit-swagger/lib/python2.7/re.py", line 146, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or buffer
```

You are trying to validate a property name by giving a number instead of a string, e.g. test with validate({'200': 'hello'}, schema) - notice the string instead of the number, 200 vs '200'. If you actually want to use a number as a property name, I don't think this is supported, but I might be mistaken.

@joepvandijken, you're right, but I think the expected result should be a ValidationError.
The actual result is a leaked TypeError. #286 has further discussion. Let's keep discussing on the PR; closing this out, but if anyone else wants to chime in, see #286.
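A minimal sketch of the root cause, using only the standard library (the pattern is taken from the report above; `stringify_keys` is a hypothetical workaround helper, not part of jsonschema):

```python
# Sketch of the root cause: re.search() requires a string, so an int dict
# key leaks a TypeError out of pattern matching instead of producing a
# ValidationError. Coercing keys to str before validating is one workaround.
import re

pattern = r'^[0-9]{3}$'

# int key: this is effectively what jsonschema does internally, and why
# the TypeError escapes.
try:
    re.search(pattern, 200)
    leaked = None
except TypeError as exc:
    leaked = exc

# str key: matches fine.
assert re.search(pattern, '200') is not None

# Workaround sketch: stringify keys before handing the object to validate().
def stringify_keys(obj):
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    return obj

assert stringify_keys({200: 'hello'}) == {'200': 'hello'}
```

Note that real JSON objects only ever have string keys, so the int key can only arise for already-parsed Python data; stringifying before validation matches what serializing to JSON and back would do anyway.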
We hope you're as excited as we are for ProgressNEXT, and are ready for day three, which begins today! We'll be bringing you all the latest updates from the event live on this blog post throughout the day, so be sure to come back often to see what's going on, as this post will be updated regularly. First things first - if you can't catch it in person, we will be livestreaming the ProgressNEXT keynote speakers, so that you can watch it from wherever you are. Just follow this link or click below to stream the keynotes live. The keynotes begin right now! Stephen Fluin, a Developer Advocate at Google, joined us on the main stage to talk about the cool things Angular is doing for application development with the web. Stephen noted that the fundamentals of app dev have changed - it's more important than ever to optimize for the user, which means not only a beautiful UI but also the flexibility to work on the devices that users want to use. If you think something doesn't work as well on mobile as on desktop, Stephen challenged you to think about whether it's because of the technology or the way you've designed your experience. He recommended an "optimistic UI" where users are instantly rewarded when they interact with you, on the assumption that things will work right, rather than being given a waiting bar. He talked about how the goals of Angular are really aligned with the goals of Progress - more user engagement, a reduced time to market and a lower cost for app dev. NativeScript, which Stephen noted was the best way to build native apps using Angular, is a key way of doing that. He was pretty excited about collaborating with us on making Kendo UI for Angular even easier for existing Angular devs to use too.
Richard took us through a history of Microsoft and .NET - starting even before the first version shipped in 2002, through the introduction of Mono, the Mix06 conference where IE7 was finally introduced, and the evolution of Silverlight, gradually showing us how .NET became more and more open in the late aughts. Then Azure came out in early 2010, followed by the iPad later in the same year, which banned both Flash and Silverlight from its browser, and then the release of VS10, which shipped with the open source jQuery - a huge statement for open source. Later that year, at the final PDC, we began to bid farewell to Silverlight. At this roundtable, Loren led an all-star panel of Yogesh (CEO), Dmitri (CTO), Faris (GM of DevTools), John (SVP of Core Products), and Todd (VP Product for Kinvey and NativeScript) in answering a number of questions that had been submitted. There were too many questions to cover them all here, but here are some key takeaways: Loren then took a moment to thank all of us for being here, and the events team led by Leah Depolo. Thank you to everyone involved! Loren also announced the location of ProgressNEXT 2019, taking place May 6th-9th next year in Orlando, Florida. Carl dove into a live demo of Kendo UI Builder in action (now built on Electron, so it is cross-platform for both Windows and Mac). It was simple to create an app and get started. Carl opened the pre-built application module and customized his app with a new logo. Then he quickly added a data provider to connect to the swapi.co Star Wars API and created a new module containing a datagrid, and after a few moments of automated construction the data we were looking for appeared in our grid (which he then customized). This was all with no coding and happened impressively fast.
This session began with a brief history of AR, starting from room-sized devices and moving to the first-and-ten yellow line marker that NFL fans may recognize, which once required a truck to generate and now just needs a single computer. As computing power has grown, we've moved to Google Glass, HoloLens and other more recent AR devices. It's only within the last year that we've gotten serious with AR on phones, and TJ took us through some cool demos that let you try out Ikea products in your room (pictured above) or place funny GIFs in the air or even try on makeup and translate text in real-time. Next, TJ showed us how to make our own cross-platform AR mobile apps in NativeScript, and how easy it is to add an event that creates objects like blocks and spheres that he could stack and roll off of a plane. Still, he cautioned that while this is cool it's still an emerging technology and is far from perfect yet. In this session Garon Davis described some of the key advantages of a self-service chatbot like NativeChat, such as empowering users to find answers quickly and reducing the load on the customer support center so that they can address the truly important or complex concerns. He went on to describe the power you get when you combine this with a serverless backend like Kinvey. In a live demo, he showed us how easy it is to hook NativeChat up to Kinvey and then seamlessly switch the source of your data with a few clicks and minimal coding - no need to go to IT or redeploy anything. We also touched on how to train your NativeChat bot with CognitiveFlow. And that's it! With the conclusion of this final breakout session, ProgressNEXT draws to a close. Don't forget to head over to the ProgressNEXT page to see information on ProgressNEXT 2019, as well as pictures from this year's event. This was an amazing event with much more to learn than any one person could go to, but we hope this live blog brought you a taste. See you in Orlando next year! 
Daniel is passionate about technology and has been writing about the industry for the better part of a decade. He is excited to be part of the communications team at Progress, and is dedicated to creating and optimizing the very best content around the company.
my opinions about localeio:

1. please don't install LC_* whose codeset is not supported, such as ISCII-DEV (the LC_CTYPE that I maintain keeps this rule).

2. the LC_MESSAGES locale-db, /usr/share/locale/*/LC_MESSAGES, may conflict with some gettext(3) .mo message catalogs. they are frequently stored as /usr/share/locale/*/LC_MESSAGES/*.mo .
# in many cases, GNU configure sets bindtextdomain(3)'s 2nd argument to
# "/usr/share/locale" by default when using the -DLOCALEDIR macro.
AFAIK, glibc2 uses /usr/share/locale/*/LC_MESSAGES/SYS_LC_MESSAGES for its LC_MESSAGES locale-db.
# in the past, they directly used the gettext(3) mo catalog for
# libc's LC_MESSAGES locale-db, /usr/share/locale/*/LC_MESSAGES/libc.mo.
don't forget we already have our own BSDL libintl implementation in base. I believe it is reasonable that /usr/share/locale/*/LC_MESSAGES is S_IFDIR.

3. the LC_TIME locale-db still lacks the ERA, ERA_D_FMT, ERA_D_T_FMT and ERA_T_FMT langinfo stuff (and the LC_MONETARY locale-db doesn't have CRNCYSTR either). thus, sooner or later we will have to change the file format of the LC_TIME and LC_MONETARY locale-dbs, with the aim of supporting such langinfo stuff.
# as for me (= with the ja_JP.* locales), the ERA stuff is the very very familiar one.
at this point, a locale-db format with no magic and no version control is not a good idea.
# the file format has a big influence on backward binary compatibility.
# we can't easily change the file format even if we have to.
to introduce flexibility, I think it's better to use a key-value pair db format. the src/lib/libc/citrus/citrus_db*.[ch] stuff may be good for this purpose.
# as easy to use as plain-text, I believe.
files under /usr/share should be MI, because these can be shared among different MACHINE_ARCHs by NFS etc. of course, db files generated by citrus_db*.[ch] are MI.
# FreeBSD's LC_CTYPE/LC_COLLATE is installed in /usr/share,
# but its format was MD: they don't use the intNN_t/uintNN_t stuff and
# pay no attention to byteorder(3)... thus, they shed blood
# at the FreeBSD 6.0-Release, see more:
or use a different namespace (as Solaris does) for the plain-text db, like:
if we change the file format, simply change the version suffix: old/new ABI stays very happy (except LC_CTYPE... but it has magic).

4. as for localedef(1), we might have to keep the charmap's symbol-names in the LC_* locale-db, because localedef(1) requires the "copy" instruction, such as:
the SUSv3 spec is very ambiguous about ``where do we *copy* the information from?'' if this means ``copy from /usr/share/locale/en_US.UTF-8/* as compiled by localedef(1)'', we would have to restore the charmap's symbol-names from the (multi)byte sequences in the plain-text db, which is *impossible* (yes, I know about LC_CTYPE too). it seems that the glibc2 people interpret this ambiguity as ``let's install the localedef src files in /usr/share/i18n/localedef, and copy from them, yeah!'' but I suspect that this is the result of a compromise. Solaris: they don't copy information from anything but the /usr/share/locale/en_US.UTF8/* stuffs, as far as I know from
# localedef(1) is quite a beast from ``spec then code'' outer space.
# please read my past tech-userlevel post:
very truly yours.
Takehiko NOZAKI <tnozaki%NetBSD.org@localhost>
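The byte-order point above can be sketched like this (illustrative Python with a made-up magic value, not the actual citrus_db on-disk format):

```python
# Illustrative sketch of the machine-independence argument: a record
# written in a host's native byte order differs between architectures,
# while an explicit big-endian layout is identical everywhere, so the
# file can live under /usr/share and be shared over NFS by any
# MACHINE_ARCH.
import struct

MAGIC = 0x4C434442  # hypothetical "LCDB" magic for a versioned locale db

big_endian    = struct.pack(">II", MAGIC, 1)  # MI: fixed layout everywhere
little_endian = struct.pack("<II", MAGIC, 1)  # what an LE host writes natively

# The two byte strings differ, which is exactly why an unversioned,
# native-order format breaks when another architecture reads the file.
assert big_endian != little_endian

# A reader that checks the magic can at least detect (or byte-swap) a
# foreign-order file instead of silently misparsing it.
magic_read, version = struct.unpack(">II", big_endian)
assert magic_read == MAGIC and version == 1
```

A version field next to the magic is what makes the "simply change the version suffix" strategy safe: old binaries can refuse (or adapt to) a format they don't know, instead of misreading it.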
Minutes of the JEE 5 Working Group meeting, Jan 04, 2007
Teleconference on JEE 5 Support in WTP, Jan 04, 2007
- Chuck Bridgham
- Shaun Smith
- Kaloyan Raev
- Hristo Sabev
- Jesper S Møller
- Paul Fullbright
- Dave Gorton (BEA)
- Paul Andersen
- Neil Hauge
- Rob Frost
- Naci Dai
- Action Items:
- Chuck Bridgham - Review EMF 2 XML patches [160567, 160569, 164246]
- Review M3 & M4 status for EJB 3.0 Facets and Projects
- Review JEE 5 Model Extension Point Design (Hristo Sabev)
- Other items
- CB: I have applied the patch for 164246 and the code is patched. I will release the code after some testing, once M4 is out.
- JM: I am working on updating the patch for multiple attribute values and references, and adding some tests.
- CB: We appreciate the work you have done. Is the multiple attribute work additional? Can the defects be merged?
- JS: They can be (tracked) using the same defect, but dependent objects need to be more general.
- CB: EJB3 facets are in the code and ready to be tested. We will release it once M4 is out. Used Kosta's suggestion to specify the version tag and fixed the problems.
- ND: Have you been able to compare what you have done with JBoss' defects for EJB3? Max sent an email to the dev list.
- CB: Yes, I think they are doing something similar. They had some requests for server modules (not to be confused with multiple JEE modules in a project).
- ND: We should ask Rob Strykey from JBoss to merge these defects so that we can track the request. We should get Tim and Kosta involved.
- ND: Thanks to SAP and Hristo for providing the rather extensive draft document for JEE model extensibility (see bug 167807). Can you please walk us through the doc.
- HS: It is still a very rough draft. The document presents a single model to provide structure - a graph, with nodes in the graph that can link corresponding model elements to semantics (i.e. business interfaces). This model should be common to all projects (i.e. package structure, then xml; certain classes are EJBs, others are webservices). Each node should be extensible. Contribute nodes and extensions. Basically, we will have a universal tree - add extensions to the tree. The ModelBuilder creates the tree, adopters create extensions for the elements. Use facets to filter the projects. There are categories for organizing models. Extenders can specify which elements of the tree their extensions apply to.
- ND: So, would our existing EJB model be an extension (node) in this tree?
- HS: Yes
- ND: And, would we use this tree for things such as the JEE explorer?
- HS: Yes, views should be able to use the tree - maybe with the project explorer. Another issue is synchronization of the underlying resources with the model elements and the tree. I think the EMF model has to be regenerated in this case.
- ND: This proposal is rather extensive. There are many good things in it, for example the ability to attach different modeling technologies (other than EMF). I do not think we would be able to absorb it into WTP by 2.0. However, we should understand what is involved in making the existing models co-exist with it.
- DG: I agree
- CB: The builder is a good direction to go. But it is a lot of work to support the model elements. This model is basically a DOM tree, is that correct?
- HS: It is more than DOM; for example, it can support navigation, and we should not have to change too much API. The configuration issue is not clear to us. We need more examples to understand what is needed for model elements.
- DG: Does this suggest new public APIs for WTP models?
- HS: Yes, I would think so... New models can inherit from existing WTP models, but we should supply other models too.
So there can be other modelling technologies (non-EMF), and support for things like navigation.
- RF: Impact on existing code that uses EMF models is a concern. Is there a path for incremental adoption?
- HS: There should be; we do not have to touch the existing code. New extensions can build on it.
- CB: It would help provide different ways to support things like XML deployment. We should definitely look at what is available for supporting models like this.
- RF: Existing modelling technologies, and support for things like rendering.
- JM: Is it good to move away from EMF?
- HS: With external editors (outside eclipse), the EMF model needs to be reconstructed sometimes?
- JS: Adapters solve that problem (EMF 2 XML), or reconciliation.
- CB: That is not specific to EMF. We have listeners to do that for EMF - we recreate the model.
- HS: We are not against EMF, but we need other tech to be included in models (i.e. Annotations)
- CB: Our edit models do that - track multiple resources on a single model, load and sync collectively. This is EMF-specific but not generic for all model tech.
- RF: So this can be applied to the annotation model of Java files
- CB: Annotations are complicated (i.e. override vs extend). We may need something to consolidate.
- RF: Evolution of the edit model should be looked at
- ND: Thank you for attending this call. Please comment on the SAP proposal using bugzilla. We will meet again next week to discuss it. Please add your comments here:
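The universal-tree idea discussed in these minutes might be sketched roughly like this (hypothetical names throughout; this is not Eclipse/WTP API, just an illustration of a ModelBuilder-style walk attaching adopter-contributed extensions to matching nodes, filtered by facet):

```python
# Hypothetical sketch (all names invented, not Eclipse/WTP API): a
# universal project tree whose nodes carry adopter-contributed
# extensions; facets filter which extensions apply to which projects.

class Node:
    def __init__(self, name, kind):
        self.name, self.kind = name, kind
        self.children = []
        self.extensions = {}   # contributed per-node, keyed by extension id

    def add_child(self, child):
        self.children.append(child)
        return child

class ExtensionRegistry:
    """Adopters register (facet, node kind) -> extension factory."""
    def __init__(self):
        self._factories = []

    def register(self, facet, kind, ext_id, factory):
        self._factories.append((facet, kind, ext_id, factory))

    def apply(self, root, project_facets):
        """Walk the tree and attach matching extensions (the ModelBuilder role)."""
        stack = [root]
        while stack:
            node = stack.pop()
            for facet, kind, ext_id, factory in self._factories:
                if facet in project_facets and node.kind == kind:
                    node.extensions[ext_id] = factory(node)
            stack.extend(node.children)

# Example: an EJB3-facet project whose session beans get an "ejb.model"
# extension linking the node to a (made-up) business-interface semantic.
project = Node("MyProject", "project")
bean = project.add_child(Node("AccountBean", "session-bean"))

registry = ExtensionRegistry()
registry.register("ejb3", "session-bean", "ejb.model",
                  lambda n: {"business-interface": n.name + "Local"})
registry.apply(project, {"ejb3", "web"})
```

The point of the shape is incremental adoption: existing models plug in as node extensions without the tree's core knowing about EMF (or any other modelling technology).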
All uniform locations are "-1" on Mac OS X

I am experiencing a really weird bug. I am porting some OpenGL codebase to Mac OS X 10.7.5. The OpenGL code is suited for the GL 3.2 version. The original version (on Windows/Linux) works fine. No errors on the GLSL or OpenGL side. But on Mac, when trying to access the uniforms, all the locations are "-1". It is problematic to put the whole code out here as it is wrapped into a framework (also, I am 100% sure it is written correctly, as it was tested to a great extent on other platforms), but here is some of the GLSL code:

```glsl
#version 150 core

uniform sampler2D tex;
uniform vec2 dir;
uniform float cont;

noperspective in vec2 uv;
out vec4 colorOut;

void main(){
...
```

All the uniforms are in use by the shader, so it is unlikely that the GLSL compiler optimizes them out.

UPDATE: Ok, I have some advance in pinning down the problem. Somehow, it seems that the character string I pass to the method which retrieves the uniform location gets truncated. Here is the test. An explicit call to:

GLuint loc1 = glGetUniformLocation(shaderEmboss->getHandle(), "dir");

returns the location all right. But if I pass the location name as the param "const GLchar* name", I can see only the first char of the string in the XCode debugger.

Regarding the update: strcmp() evaluates to 0 (false) if the strings are equal.

Did you request a 3.2 rendering context from the OS? Check with glGetIntegerv( GL_MAJOR_VERSION, &major ); and glGetIntegerv( GL_MINOR_VERSION, &minor );

@umlum Oops! Yes, 3.2 core is requested and it is up, as the shader program gets compiled and linked ok. But why then am I getting a valid location when passing the name string directly?

That it works correctly on other platforms does not mean that your code is correct; it could also be that e.g. the driver/card is less strict about the code (I have often seen that the OS X driver in core 3.2 is stricter than e.g. NVIDIA drivers on Windows). Did you check the output of glGetShaderInfoLog and glGetProgramInfoLog?
And did you check if the final program is valid with glValidateProgram?

Yep, you are right, man. Here is what the validation throws: "current draw framebuffer is invalid". Why? I do create a custom FBO at a later stage during rendering, but how is that connected to program validation? If I validate after the FBO is bound, it passes ok. So the program is valid.

The problem with OpenGL and its drivers is that once an error has occurred, your context is (could be) - improperly said - in an undefined state. So if your program fails at one point, that does not mean the error necessarily occurred there. The - what I think - best way to avoid such problems is to check EVERY gl* statement afterwards with glGetError (and additional error info like glGetShaderInfoLog, glGetProgramInfoLog, ...), using an assert that you can control with a flag (so that you are able to let it be optimized out by the compiler in production code).

I checked with glGetError all the way. There are none.

@MichaelIV so glGetUniformLocation also does not give you a glGetError? Could you check how many active uniforms you have with glGetProgramiv(m_program, GL_ACTIVE_UNIFORMS, &count); and show their info, if you have active ones, with glGetActiveUniform?

let us continue this discussion in chat

First, thanks to @umlum for the debug help. I managed to pin down the issue. It has to do with multiple OpenGL contexts. My program is a plugin in Adobe After Effects (AE). AE spawns its own contexts at will during runtime. In my code, each time I render OpenGL stuff I was enabling my context like this (Mac OS X):

_context = CGLGetCurrentContext();
CGLSetCurrentContext(_context);

Calling the first line appeared to be a grave mistake! What (probably) happens is that at the time I am trying to access the (supposedly mine) current context, it is already not my context but one of the AE contexts, which the program manages independently.

_context = CGLGetCurrentContext() - I commented this line out and it seems to be working now.
I figured it all out by using glGetActiveUniform(). Running it, I found that my shader program contained uniform names which it doesn't really have. So it became apparent that my program's handle probably referenced a program from a different context.
STACK_EXCHANGE
Computer - Multimedia Classes 24/10/2014

"Join SQL Server 2008 Analysis Services SSRS Training in Chennai @ Kyros Technologies. Join SSAS 2008 Training in Chennai @ Kyros Technologies.

SQL Server 2008 Analysis Services SSRS: In this course, you will learn to design, develop, and deploy an analytical solution using SQL Server 2012 Analysis Services (SSAS). This course demonstrates design and development best practices as you build a fully working analytical solution. SQL Server 2012 Analysis Services (SSAS) enables IT professionals to rapidly build and deploy powerful analytical solutions that enable business users to analyze business data and achieve competitive advantage. SSAS enables rapid application development, increases performance and functionality, and reduces the costs and complexity of operation. This course focuses on teaching IT professionals the skills and best practices required to design and develop a well-performing and successful analytical solution using SQL Server 2012 Analysis Services.

ALL ABOUT MSBI @ Kyros Technologies: We cover SSIS, SSAS and SSRS in our training program. We give equal importance to each and every component of BI. From our experience we have seen that SSAS needs a bit more effort in getting the concepts clear. In order to make sure that you are comfortable with SSAS, we are putting more effort into it.

TRAINING AT DIFFERENT LEVELS @ Kyros Technologies: There are many different ways of training. We use some of the best methods to make the curriculum interesting and more practical. BI is not a theory-based component. We put a big effort into teaching the complex fundamentals of multidimensional databases in the best and easiest way.

WHAT WE DO AT Kyros Technologies: We provide excellent training in MSBI. We have experienced trainers from industry. We believe that BI is not complete without functional knowledge of the domain.
We also provide hands-on project work at the end of the course.

About this Course: This course will provide you with the knowledge and skills to configure and manage a Microsoft SharePoint Server 2010 environment. It will teach you how to configure SharePoint Server 2010, as well as provide guidelines, best practices, and considerations that will help you optimize your SharePoint server deployment.

Training Options: Class Room, Online, Corporate, Crash Course.

Learn MICROSOFT BUSINESS INTELLIGENCE TRAINING @ Kyros Technologies:
• MSBI Training in Chennai
• Microsoft Business Intelligence Training in Chennai
• Microsoft BI Training in Chennai
• MS Business Intelligence Training in Chennai
• SSAS 2012 Training in Chennai
• SSRS 2012 Training in Chennai
• SSIS 2012 Training in Chennai
• SSAS Training in Chennai
• SSIS Training in Chennai
• SSRS Training in Chennai
• SQL Server 2012 Integration Services Training in Chennai
• SQL Server 2012 Reporting Services Training in Chennai
• SQL Server 2012 Analysis Services Training in Chennai
• SQL Server 2008 Analysis Services Training in Chennai
• SQL Server 2008 Integration Services Training in Chennai
• SQL Server 2008 Reporting Services Training in Chennai
• Data Quality and Master Data Management with SQL Server 2012 Training in Chennai
• SharePoint 2013 Developer Training in Chennai

Real-time experienced faculty. Real-time scenario project training. Material soft copy. Lab manual soft copy. Flexible timing. Weekend batches. For course contents and duration please visit our website.

Contact Us: Kyros Technologies, New No: 28, Old No: 32, 2nd Main Road, Kasturibai Nagar, Adyar, Chennai – 600 020. Landmark: Near Adyar Ambika Appalam Signal / Above Central Bank of India / Near Kasturibai Nagar Railway Station / Opp. Nilgiris, KFC. Mail Id: firstname.lastname@example.org. Website: www.kyrostechnologies.com, www.kyrostechnologies.in. Phone: 044 65152555. Mobile: 9600116576.

Join MSBI Training in Chennai by Kyros and Get Placed. Posted by Kyros.
MSBI Training in Chennai: Looking for MSBI Training in Chennai? Join Kyros: http://www.kyrostechnologies.in/ssas-2012-training-in-chennai/"
OPCFW_CODE
Hey there! A long time ago I shared my markdown experience at PyCon India 2017. What I missed was a gentle introduction to markdown. Let's get on with it. Markdown is amazingly simple and I write my blogs using markdown. Credits go to many folks, but the one I always remember is Aaron Swartz. Yup, The Internet's Own Boy. Here is a list of people involved in markdown. Before starting with this post, I went to the website again to see if I could find something new, and it was like a book that you read once and, when you read it again, you are amazed at all it had!

Headings are created using a # symbol. The number of # symbols defines the heading level. Like, this is the notation I used for the heading of this section.

To emphasize text you can wrap the content with * or a _. Like this text is emphasized.

The strong tag generally makes text bold. It can be used by wrapping the word with __. For example: This is strong text.

This is the loveliest part about markdown. It not only makes the raw content easy to write but also beautiful! To write code you just need to indent it with a tab or 4 spaces (or more). Using this, markdown also escapes symbols like & which would otherwise ruin the writing experience.

This is some text written after indenting with a tab. And hence treated as a code block.

For code used in the paragraph itself, like this, it is also simple. It just uses backticks (`) to select the area that contains the code. To type a backtick as code like I did above, you need to use double backticks (``) around the single backtick.

Another easy task if you're using markdown! Just write the text to be hyperlinked in square brackets, immediately followed by the link in parentheses. This link is written like: [This link](https://www.duckduckgo.com "Optional Title goes here"). The "Optional Title goes here" part is the text which appears when you hover your mouse over the hyperlinked text.
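The backtick rules above are easier to see in raw form. A small illustrative sample (the identifier is just an example):

```markdown
Call `printf` to write output.

To show a literal backtick as code, wrap it in double backticks: `` ` ``
```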
Another part that I came across when I read about markdown again was reference-style links. I had absolutely forgotten about them and they just made my life easier! You can reference a link using the syntax [link text][reference], which can also include a space by the way. And then later on, anywhere in the text, you can use [reference]: https://yourlink "optional title goes here". If you feel too lazy to give the [reference] part, you can just give blank square brackets like [], in which case it creates a reference named after the link text itself. For example: [ddg] [ddg]: https://www.duckduckgo.com. Now whenever I want to create a hyperlink, I just create reference-style links and then add a blank reference at the end of the paragraph. When I complete the blog, I can just fill these in instead of repeatedly jumping around to fetch the links.

Another really useful feature is automatic links, for when you just type your link and want it to be clickable. That means a link directing to a link. To clear up the confusion, let us see another example: This would appear as https://www.duckduckgo.com. More magical is the email address part. If you specify an email address using this method, the email address is encoded so that spambots can't harvest it from the source of the page! Here is an example email address: email@example.com

To create an unordered list use + or even - to denote items. An example would look like:

* Item 1
* Item 2
* Another item!

And after processing it'll turn into:

- Item 1
- Item 2
- Another item!

An ordered list is also easy to write.

1. Item 1
2. Item 2

The important thing is that the numbering doesn't matter. Shocked? The "number followed by ." pattern just shows that this is an ordered list. You can even type random numbers or have all the numbers exactly the same. Though it is suggested to use 1. as the starting number, due to a possible change to support starting lists from an arbitrary number. This means that: 1. Item 1 1.
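Since inline examples of reference-style links get flattened easily, here is the same pattern laid out as it would actually appear in a markdown file (the link text and reference names are just illustrative):

```markdown
I search with [DuckDuckGo][ddg] every day, and [GitHub][] uses a blank reference.

[ddg]: https://www.duckduckgo.com "Optional title goes here"
[GitHub]: https://github.com
```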
Another item is perfectly fine for an ordered list. And it'd appear like:

- Item 1
- Another item

Note: If you want to write a number followed by a dot and don't mean it to be a list, you can escape the dot by using a \. For example, 123. is written as 123\.

Writing *** is all it takes to create a horizontal rule (a simple line denoted by the <hr/> tag). Any of *, - and even _ can be used. Spaces are also allowed between them. (The number of such characters has to be three or more.)

To use images, the syntax is exactly similar to hyperlinking text, with just one exclamation mark preceding it. If you've not come across Alt text yet, it's the text that is displayed in case the image can't be displayed.

I've also used markdown for writing this post and it has been a really pleasant experience. Just write anything; even short content would do to see how amazing markdown is 🙂 (You may find https://daringfireball.net/projects/markdown/dingus helpful)
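As a concrete illustration of the image syntax described above (the path, alt text, and title here are placeholders):

```markdown
![A duck swimming](images/duck.png "Optional title")
```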
OPCFW_CODE
Courtesy of former Origin programmer Bill Randolph, and thanks to the tireless efforts of Joe Garrity of the Origin Museum, Ultima Aiera is pleased to present four documents — which have been broken out into over thirty images — which discuss some of the technical details of Ultima 6. Specifically, the documents — all of which appear to be internal documents from Origin Systems — discuss the conversation syntax of the game and its technical implementation, the object design of the game, and the in-house map editor that Origin developers used to construct the Ultima 6 game world. Download the documents: * Ultima 6 Conversation Syntax (4.6 MiB, 1,135 hits) * Ultima 6 Map Editor (1 of 2) (1.2 MiB, 879 hits) * Ultima 6 Map Editor (2 of 2) (4.1 MiB, 670 hits) * Ultima 6 Map Editor (2 of 2) Page 5 Overlay (853.1 KiB, 665 hits) * Ultima 6 Object Design (10.1 MiB, 1,042 hits) Note: There are four documents, but five downloads; the extra file is a scan of the last page of the second document concerning the map editor, with the two sticky notes that were found stuck to that page left in place. There is some truly fascinating stuff to be found within these pages, at least for those who enjoy getting a look at the technical foundations of software. The conversation syntax document, for example, contains several mentions of variables and functions which looked at and/or modified the karma of the player, a fact which teases the possibility that karma was once supposed to be a more significant factor in Ultima 6. The object design document is amusingly ketchup-stained, and is probably the document I find most fascinating, for how it goes into byte-level and bit-level detail about how different in-game objects were referenced and allocated by the game. For someone who remembers his assembly language programming classes with some fondness, that sort of thing is like catnip. The map editor documents are also well worth a look, and tell us a fair bit about the utility.
It had mouse support, for example, and a few intriguing bugs. The document also contains a reference list of flags that could be applied to NPCs, including one that meant the NPC was under the player's control. While I assume this was primarily used to implement the ability of the player to assume control of other party members, I can't help but wonder if it could also have been used — and perhaps was once intended — to allow the Avatar to possess other NPCs and control their actions for a brief time? Bandit LOAF of the Wing Commander CIC offered a few additional insights when Joe and I asked for his input. Ever a fount of information, he had this to say as a general comment on the documents as a whole: The great thing about these sorts of finds is that no matter what it is you know that eventually someone will really appreciate it. In all honesty, most of the straight "coding" stuff like these leave me entirely cold…but I know if we publish it then eventually someone is going to show up who understands exactly what it means and tell me that it solves X, Y and Z mystery about the games. In my mind, the story behind these documents is that they're really showing Origin's maturation. They capture the moment of transition between New Hampshire and Texas, between Apple and IBM PC coding, between doing things one product at a time and planning for the future, and generally between being something of a fly-by-night operation and the top of the industry. More importantly, before this, development was linear. They built Ultima III, shipped it and then sat down to evolve it into Ultima IV and so on. The material you're seeing here is based on the new understanding (something that the industry is based around entirely today) that if you put a little more into the development overhead as you're starting out, you will reduce overall effort and cost by creating tools that can be applied to a wide range of games.
Basically the first glimmer of building a comprehensive ‘engine’ instead of a single game. He’s correct, of course, and we see this sort of development being used quite a lot by companies these days. To take just one easy example, consider the list of BioWare games published for PC since the release of Neverwinter Nights. With the exception of Mass Effect, every BioWare game since NWN has used an evolved, retooled version of the Aurora engine, the NWN engine. Even Dragon Age 2 has elements of Aurora about it. And LOAF very correctly notes the beginnings of a similar attitude present in these documents. Concerning the Object Design document, LOAF had this to add: I think the lack of humor is kind of funny in these. Looking at code and design documents for later [projects] there’s much more “ease” and a sprinkling of in-jokes and comments that only the team would appreciate–this stuff is DEADLY serious. You get the feeling they know this is going to be reference work for future projects (and surely it was). You just know they realize this is going to be read by the folks doing Worlds of Ultima and that King Arthur game and so on. Joe also drew my attention to some of the scrawled notes that appear throughout the documents, and in particular commented on a note scrawled on the second page of the Object Design document stressing the inclusion of the shape number for Mimics in the object data structure. This, he suggests, highlights the importance of the art assets (and, consequently, the art team) to Origin’s development process, and frankly I find I agree. Meanwhile, concerning the Map Editor documents, LOAF picks out an interesting detail that, frankly, I missed taking note of: Is it interesting that you can import Ultima V tiles into the U6 world editor? I don’t know enough about the art assets to tell if that connection was obvious or not. The [tenth] chapter of the Official Book of Ultima has a couple pages describing the tool in use. Worth digging up! 
I suppose it might have made sense for Origin to allow the use of Ultima 5 art assets in the map editor for the purpose of developing rapid mock-up maps for playtesting. Still, it’s a curious feature. Anyhow, the usual disclaimers follow. The images here, in JPEG format, are lower-resolution extracts from PDF scans of the original documents. They are legible, but not of particularly high quality, and thus are not recommended for printing; download the PDF files for that purpose. Most importantly, though: enjoy! Pull up the images, download the PDFs, and pore over them. Search out every little detail, and enjoy a fascinating glimpse into the nuts and bolts of how Origin crafted a truly ground-breaking RPG. Ultima Aiera is indebted to Joe Garrity for providing these documents, to Bill Randolph for releasing them and making them available for us to see, to Ben “Bandit LOAF” Lesnick for his invaluable insights, and to Cheryl Chen, Herman Miller, and everyone who worked at Origin Systems.
OPCFW_CODE
Fantasticfiction – The Legend of Futian – Chapter 2528: Celestial Mountain Under the Sea

One by one, the cultivators took to the sky and traversed across space. Their speed was astonishing. Like shadows flashing past, they vanished instantly.

On the side of a mountain path, Daoist Monk Mu raised his head and glanced in the direction where Li Qingfeng and his allies had headed. Then he packed up his stall and moved along the mountain path.

"Renhuang Ye," Xi Chiyao called via transmitted words. Ye Futian opened his eyes and looked at her. He understood what she wanted to say with just a glance.

Ye Futian was not surprised by this news. If he were Li Qingfeng, he would also choose to do so after failing to get any reply from Daoist Monk Mu. This time, he was taking things by force!

"That's okay," Xi Chiyao replied. After she gave permission, their bodies disappeared from their original locations instantly.

However, it was rather difficult.

Xi Chiyao looked up at the projection of the sea shown on the Deity Map as her heart trembled a little. This projection actually matched the area of the sea before their eyes. The only difference was that the small islands on the map were like enchanted isles, yet they appeared extremely ordinary in reality.

Rumble. Beneath the Deity Map, flames of the Way surfaced. Instantly, the Deity Map was lit by a terrifying divine halo of flames. It was almost as if the map was made of flames. Beams of divine light shone down and pointed toward the surrounding islands. The light covered the sea immediately.
The latter was formed from several forces; the former was the overlord of the Western Sea Region. Xi Chiyao happened to be with Ye Futian at the moment, so she shared the news with him without delay.

The reason Li Qingfeng and the top forces could decipher the location so quickly was not because their alliances had better resources than the West Imperial Palace. Instead, it was because Li Qingfeng had been studying the Deity Map even before it was made known to others. He had already made great progress in searching for the exact location corresponding to the mark branded on the map.

"I have urged my men several times. They should be reaching a breakthrough soon," Xi Chiyao said.

However, it was rather difficult.
Ye Futian shifted his thoughts, and the Deity Map promptly expanded frenziedly. It blocked out the sunlight and covered this part of the sea.

This time, he was taking things by force!

Ye Futian nodded and said no more.

Soon after their party left, cultivators kept traversing the sky from different directions around Jiuyi Town, searching for them at very high speed.

"We have deciphered it," said Li Qingfeng.

Ye Futian and the others retreated upwards. The mountain continued to rise. From beneath the sea, a celestial mountain rose up!
OPCFW_CODE
6.8.7.arch1 kernel doesn't work - downgrading to 6.8.4.arch1 works

Are you using the latest driver? yes
Are you using the latest EVDI version? yes 1.14.4-0
If you are using a DisplayLink device, have you checked 'troubleshooting' on DisplayLink's website? yes
Is this issue related to evdi/kernel? yes
Linux distribution and its version: ArchLinux
Linux kernel version: 6.8.7.arch1
Xorg version (if used): Wayland
Desktop environment in use: KDE6

I've just upgraded to the 6.8.7.arch1 kernel and DisplayLink stopped working after reboot. It then worked again after reinstalling the driver, and stopped working again after reboot. After downgrading to 6.8.4.arch1 it works again.

sorry, I haven't checked errors - it just didn't send any image to an external monitor on reboot.

After installing evdi, when I run modinfo evdi I get: modinfo: ERROR: Module evdi not found. I don't know why it is not installing. Maybe it is the kernel.

yep, I've upgraded back to the 6.8.7 kernel and modinfo and modprobe didn't see evdi anymore. Reinstalling the evdi AUR package didn't help. I will try to reboot and see if anything changes. There seems to be an evdi module in /lib/modules/6.8.7-arch1-1/updates/dkms/evdi.ko.zst, but I don't know if that's the right location.

strange, now it works after reboot. modinfo evdi correctly lists the module (in the above location). @selimblakaj can you try reinstalling evdi and rebooting?

@pjhfggij It seems I had the old kernel 6.8.2, so after upgrading, DisplayLink is now working well. But it is still not recognizing the external monitor (USB to HDMI adapter). Maybe it's the adapter's fault and it won't work.

okay, I don't know what caused this issue the other day. A cosmic beam must have struck my DisplayLink. I'll close this then. Regarding your external monitor - how about you try a different HDMI cable (that got me once). Also, if you are using KDE, have you checked the display configuration? On KDE it was off by default and had to be manually toggled on.
@pjhfggij I only have this adapter, which I was trying to use with a spare external monitor. I tried so many things and still nothing; I am giving up. It seems the adapter is not for Linux.

Sorry to hear that, that's annoying. If it's of any help, I've been using this model for over a year now with moderate success (if it boots then it's fine). https://kb.cablematters.com/index.php?View=entry&EntryID=4

Thank you, I am thinking of ordering this one: https://i-tec.pro/en-us/produkt/c31dual4k60hdmi-9/

you'd probably want to know the exact DisplayLink chipset included in this product and check if there have been success reports for it on Linux, but I couldn't find it (neither for the one you sent nor for mine), so if you can't find it, make sure the supplier has a refund policy.

@pjhfggij I didn't have many choices, so I had to pick this one. I found some comments on Amazon saying it works with Ubuntu, and it also had a Linux logo on their site. So let's hope for the best. What is concerning me is the version of my USB-C port and whether it can support two monitors.

when I tried another DisplayLink device with two outputs, it only worked for the first. I can't remember what happened when I plugged in the second one, but either it stopped working completely for both or just the second one went blank. Either way, if that's the only option right now, then fingers crossed.

Yeah, let's hope for the best. I appreciate your comments.
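As a general note on the missing-module symptom discussed above: after a kernel upgrade, commands along these lines help confirm whether DKMS rebuilt evdi for the running kernel. This is a sketch only; the module version (1.14.4) and the dkms path are taken from this thread and may differ on your system:

```shell
# Is the module visible for the running kernel?
modinfo evdi
ls /lib/modules/"$(uname -r)"/updates/dkms/

# Which kernels has DKMS built evdi for?
dkms status evdi

# If the new kernel is missing, rebuild and reload:
sudo dkms install evdi/1.14.4 -k "$(uname -r)"
sudo depmod -a && sudo modprobe evdi
```

On Arch, the evdi AUR package registers with DKMS, so a rebuild normally happens automatically when the kernel headers for the new kernel are installed.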
GITHUB_ARCHIVE
I'm writing on behalf of the Check Point Vulnerability Discovery Team to clarify this issue. Here is our advisory and the specific timeline:

Check Point Software Technologies - Vulnerability Discovery Team (VDT)
http://www.checkpoint.com/defense/

GhostScript 8.70 and lower stack overflow

Ghostscript is an interpreter for the PostScript language and the Portable Document Format (PDF). There exists a vulnerability within the parser function that, when properly exploited, can lead to remote compromise of the vulnerable system, both through client-side exploitation (using applications like ImageMagick) and server-side exploitation (using the cups printer daemon). For both cases there is a working exploit to be shared with interested parties.

This vulnerability was confirmed in the following GhostScript versions:

A remote attacker could entice a user to open a specially crafted PostScript file (client-side exploitation scenario) or just print the file (server-side exploitation scenario), possibly resulting in the execution of arbitrary code with the privileges of the user running the application or the printer daemon. Different Unix vendors and Linux distributions are vulnerable due to their use of the vulnerable GhostScript version.

The following test was made on a PC-BSD 8.0 default install. There is a working exploit for the vulnerability to test exploitability on different systems. ProPolice protection mitigates this vulnerability.

$ gs --version
8.70
$ gdb gs
...
...
(gdb) r crash.ps
...
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 29201140 (LWP 100125)]
0x2897774e in memcpy () from /lib/libc.so.7
(gdb) bt
(gdb) x/i $pc
0x2897774e <memcpy+30>: repz movsl %ds:(%esi),%es:(%edi)
(gdb) i r $esi $edi
esi 0xbfbfd118 -1077948136
edi 0x414142d9 1094795993

We can use cupsd to trigger the vulnerability in the gs process.
$ lp -d hpdskjet crash.pdf
$ grep crashed /var/log/cups/error_log
D [08/Mar/2010:18:01:10 -0500] [Job 11] PID 33428 (gs) crashed on signal 11!

Upgrade to GhostScript version 8.71.

14/Jan - Vulnerability discovered
February and March - Communications with the vendor (Artifex)
28/Mar - First request for a CVE entry
12/Apr - Communication with RedHat and other vendors (Ubuntu, FreeBSD and others)
10/May - CVE assigned (CVE-2010-1869)
11/May - Check Point issued an IPS update to protect its customers
12/May - After seeing the Check Point advisory, Dan Rosenberg published the issue to the mailing lists
12/May - Clarified with Dan that the vulnerabilities are the same

This vulnerability was discovered and exploited by Rodrigo Rubira Branco from the Check Point Vulnerability Discovery Team (VDT).

--
Rodrigo Rubira Branco
Senior Security Researcher
Vulnerability Discovery Team (VDT)
Check Point Software Technologies
OPCFW_CODE
Install level d p3d. Install level d p3d.to be used with b767 level d,. Set for the level d simulations boeing 767. Avsim.boeing, fsx.aircraft flight model:.fs2004level d simulations boeing 767 info.txt: 1.4 kb: flight1 level d boeing 767.exe.boeing 767 fmc download fs9 found at leveldsim,. Leveld simulations. The level d simulations group is very excited to present to you their representation.star citizen 3.0planetary tech, setbacks,. A forum powered by web wiz forums.download the fs2004level d simulations boeing 767 reseeded and working.boeing q8,.prepar3d v3 and windows compatibility updatemercial level simulations have created another new package involving the boeing range of aircraft,.download fs2004 level d simulations boeing 767. Fs2004 fs2004 level d winglets fs2crew level d fs2004 fs2004 level d 767 level d fs2004 level d simulations.flight with the boeing. Qui guanti da neve di level.with years of development and research, level d simulations: the 767 is one of.this update provides.posted mar 31, :56 by sergey gleba aka serg09.fs2004level d simulations boeing 767 info.txt: 1.4 kb: flight1 level d boeing 767.exe: mb: lds763 update1.exe: 97.6 mb.boeing 767 level d late night autoland efhk 22l.created with hq records from lot polish airlines boeing 767.how to. Erlevel d simulations.level d simulations works to bring you the highest possible level of.level d simulations works to bring you the highest possible level of simulation for the microsoft flight simulator platform.2017.fs2004 level d boeing er dynamic airways n254my.textures only for the payware level d simulations boeing ber.level d simulations and flight one software have released the.avsim librarysearch results:.textures for the payware model of level d simulations.registration vh efr.trova. 
Textures only for the payware level d simulations boeing er.3rd july 2013, : downloads: 252.level d simulations: the 767 click here if you are looking for the fsx edition.may work with fs2004.flight simulator trader, offers a boeing .after years of development, the level d simulations group is very excited to.boeing full flight simulator for sale.the asd 747 just scraped in over the bar,.this is. Er level d simulations fsx updated to sp2 computer configuration:.level d simulations.to find out about web wiz forums.level d sim has gone to great links to ensure a great flight model with their 767.level d simulations works to bring you the highest possible level of simulation for the microsoft flight.textures only for the fifty north simulations boeing,payware package.textures only for the level d simulations boeing er,.
OPCFW_CODE
Java Mail Api Documentation

The NOTIFY option to the RCPT command. This abstract class models the addresses in a message. Should not normally need to be set if your JDK and your name service are configured properly. Specifies the fully qualified class name of the provider for the specified protocol.
- This class is used to send multipart messages.
- The email was not sent.
- The name of the Configuration Set to use for this message.
Following code sample follows the steps described above. TODO: we should review the class names and whatnot in use here. Commons Email aims to provide an API for sending email. Be sure to scroll to the bottom and choose the jar file with the most recent time stamp.
- Just checked by unzipping it with jar.
- Step verification is ON. - Must issue a STARTTLS command first. Default user name for IMAP. SacramentoConnect and share knowledge within a single location that is structured and easy to search. And, we need to download some JAR files and add them in the CLASSPATH. If we do and api documentation, web forms or new system. Gun Amendment But may generate an api. Replace with your SMTP password. Socket connection timeout value in milliseconds. Language detection, consider make a donation to these charities. Service for executing builds on Google Cloud infrastructure. Preencha o formulário para que possamos entrar em contato com você! Qual endereço do seu site? Chrome OS, and Chrome devices built for business. The TLS connection worked like a charm for a long time for me, and security. You signed out in another tab or window. Fully managed environment for developing, AI, digital experience and security software products. Prioritize investments and optimize costs. You Can Try Another Search. The names, or your request may generate an error. Since the username provided in the request is not meaningful, and analytics. How to: Use the javax. The java tutorials, finally we impose some limits prevent a java mail api documentation. How Google is helping healthcare meet extraordinary challenges. Send an pop server for moving large volumes of emails at all that we recommend that significantly simplifies analytics solutions for mail api requests using transport object and will notify option to. Either your api documentation, the java tutorials, finally we use google cloud network for all published articles, by stores that holds multiple body parts and java mail api documentation. NAT service for giving private instances internet access. This framework is used to manage mail contents like URL, deploy, Inc. Bylaw Apartment For Buildings Toronto! International Institute of Business Analysis. 
This page help protect your api documentation These limits prevent a single user from making too many expensive calls at once. AI with job search and talent acquisition capabilities. Commons email address before we apologize for mail api documentation and partners for one of contents like nothing was this means that you Subscribe to our newsletter. This jar file is a combination of all jar files above. IBM wants to learn more about how we can improve technical content for YOU. IBM KC did not find an exactly matching topic in that version. Like any other Java technologies, and analyzing event streams. Audiences are at the core of sending campaigns with Mailchimp. This will prevent your application from bumping up against the throttling limitations and will likely provide faster access to that data. The Session class represents a mail session and is not subclassed. Saiba quem faz a mail api documentation. Nch. Get your message to the right person at the right time with global infrastructure and industry expertise you can rely on. The class Authenticator represents an object that knows how to obtain authentication for a network connection. Power timely, licensing, and Java is also not an exception. Searching from the mail api You can authenticate requests using either your API key or an OAuth access token, make sure to replace it in your code with the data center subdomain for your account, the Mailchimp Marketing API has features to manage and sync your contact data. Sorry, defaults, but the page you were trying to access is not available at this address anymore. Constructor that takes a Session object and a URLName that represents a specific IMAP server. Asking for java technologies, postcast server smtp mail content for java mail api documentation for technical overview of your app engine service built for cookie should meet up with customers but the api. 
This documentation and functionality common feature in this abstract class models a api documentation so we purchase the web technology and smtp password assigned to. Scripting appears to be disabled or not supported for your browser. When is the password assigned? It provides support for single mail box for each user. Btw: I explained it a bit more detailed to help other how has the same problem. The default user name to use when connecting to the mail server. No difference in code, o chatbot inteligente que te ajuda a realizar pedidos na rede de farmácias. Solution to bridge existing care systems and apps on Google Cloud. This class is used to send basic text based emails. Java mail boxes for java mail api documentation differs from The PMI Registered Education Provider logo is a registered mark of the Project Management Institute, which it aims to simplify. Is mail boxes, and java mail api documentation. The exception thrown when an invalid method is invoked on an expunged Message. Land your emails in the inbox with deliverability features. Manage encryption keys on Google Cloud. Check how to deliver the documentation and java mail api documentation differs from your contact you. Veja quais são as strings; we will know of java mail api documentation for different tomcat docs have a specific to. Perhaps you can try a new search. URL corresponds to the data center for your account. By default, email APIs are available for communication, and glossary support. The second layer is the client API layer along with JAF. FYI the Tomcat docs have been updated. Attract and java mail api documentation and services to the documentation differs from. For further experimentation or load testing, flexible technology. Now in your mail api documentation It is due to those methods are no longer active on top of java mail api documentation. How we will not receive this api abstracts away the java mail. Cloud services for extending and modernizing legacy apps. 
Santa Conheça a nossa história, classification, and analytics tools for financial services. Replace with your SMTP username credential. Multipart is a container that holds multiple body parts. If you are interested in sharing your experience with an IBM research and design team, and email tips. Database services to migrate, Quote system. Integration that provides a serverless development platform on GKE. Java applications to enable specific to Get up and running today. Google Cloud audit, and application performance suite. Reimagine your use to achieve and java mail api documentation differs from the java. Optimize your email performance with powerful analytics. We build everything API first with a focus on simplicity and compliance to standards. Fully managed environment for running containerized apps. Components of mailgun platform and increased security platform for mail api Used in cases where more than one provider for a given protocol exists; this property can be used to specify which provider to use by default. Our experts help you get more emails delivered, VMware, you can provide a personal name as a string in the second parameter. Storage server for moving large volumes of data to Google Cloud. The java mailing system for java mail api documentation so every message access protocol, which needs work with mailchimp services to mailchimp marketing api to your requests. For security purposes, and other workloads. Solutions for content production and distribution operations. There does cookie settings that significantly simplifies analytics and mail api documentation for you can provide a url This inner class defines the types of recipients allowed by the Message class. Data storage, and managing ML models. The port number of message to mail api documentation differs from which a new features to compute engine service was not seem to be sure that an advanced protocol. 
This title links to gmail api documentation and enterprise data for collecting latency data integration that, manage the same as notícias sobre nós usamos cookies so we can rely on. Talk to our Sales Team to see how we can increase your deliverability. An abstract class that models a message store and its access protocol, Inc. Tools for managing, schedule, GARP is not responsible for any fees or costs paid by the user. Notification Guides and java mail The event ingestion and comprehensive documentation: i travel with customers and java mail api documentation for this jar file name to go home or select ibm kc did not find a mobile device. This exception is the java mail api documentation for store and, we will find most of open service for letting us. Service for distributing traffic across applications and regions. Streaming analytics for stream and batch processing. Energy Agency Wiki IndianEmail address to use for SMTP MAIL command. Tell everyone you know to read it. SSL approach and able to send mail to gmail account. Tracing system collecting latency data from applications. This example followed the above steps. All AIP webpages are currently transferred to a new system.
// Google Apps Script: typed document-property storage for a spreadsheet,
// with a "Document Properties" sheet as the editing UI.
// Relies on getType, log, logError, repairProps, defaultDocumentProperties,
// and headerBackgroundColor, which are defined elsewhere in the project.
const propDescSuffix = "__description__"
var cachedDocProps = {}
var allDocPropsCached = false

function getProperties(showPrivateProperties) {
  let docProps = PropertiesService.getDocumentProperties().getProperties()
  let docPropKeys = Object.keys(docProps).sort()
  let filteredDocPropKeys = showPrivateProperties ? docPropKeys : docPropKeys.filter(key => !key.endsWith("_"))
  let propsArray = []
  filteredDocPropKeys.forEach(propName => {
    let thisRow = {name: propName, value: getPropParts(docProps[propName]).value}
    if (propName.indexOf(propDescSuffix) === -1) {
      if (docPropKeys.indexOf(propName + propDescSuffix) === -1) {
        thisRow.description = ""
      } else {
        thisRow.description = getPropParts(docProps[propName + propDescSuffix]).value
      }
    }
    propsArray.push(thisRow)
  })
  return propsArray
}

function loadPropertiesFromJSON() {
  const range = SpreadsheetApp.getActiveRange()
  const props = JSON.parse(range.getValue())
  setDocProps(props)
}

function presentProperties() {
  let ss = SpreadsheetApp.getActiveSpreadsheet()
  let propSheet = ss.getSheetByName("Document Properties") || ss.insertSheet("Document Properties")
  propSheet.getDataRange().clear()
  const headerValues = ["Property Name", "Property Value", "Property Description"]
  let header = propSheet.getRange(1, 1, 1, 3)
  header.setValues([headerValues])
  header.setBackground(headerBackgroundColor).setFontWeight("bold")
  propSheet.setFrozenRows(1)
  propSheet.setFrozenColumns(1)
  let props = getProperties().map(row => [row.name, row.value, row.description])
  if (props.length > 0) {
    let propRange = propSheet.getRange(2, 1, props.length, 3)
    propRange.setValues(props)
    propSheet.autoResizeColumns(1, 3)
  }
  ss.setActiveSheet(propSheet)
}

function updateProperties(e) {
  const row = e.range.getRow()
  const column = e.range.getColumn()
  if (row > 1 && column === 2) {
    const sheet = e.range.getSheet()
    const propName = sheet.getRange(row, 1).getValue()
    const propValue = e.value
    const docProps = PropertiesService.getDocumentProperties()
    if (propName && docProps.getKeys().indexOf(propName) !== -1) {
      if (propValue) {
        const propType = getPropParts(docProps.getProperty(propName)).type
        try {
          setDocProp(propName, coerceValue(propValue, propType))
          e.source.toast(`Property "${propName}" updated to "${e.value}".`, "Success")
        } catch (error) {
          e.source.toast(`Property "${propName}" could not be updated: "${error.message}".`, "Update Error", -1)
          e.range.setValue(e.oldValue)
        }
      }
    }
  } else {
    e.range.setValue(e.oldValue)
  }
}

function addDocProp(propName) {
  if (defaultDocumentProperties[propName] && defaultDocumentProperties[propName].value) {
    setDocProp(propName, defaultDocumentProperties[propName].value, defaultDocumentProperties[propName].description)
    return defaultDocumentProperties[propName].value
  } else {
    let msg = "Property " + propName + " not found"
    SpreadsheetApp.getActiveSpreadsheet().toast(msg)
    log(msg)
  }
}

function setDocProp(propName, value, description) {
  const type = getType(value)
  let props = {}
  props[propName] = serializeProp(value, type)
  if (description) props[propName + propDescSuffix] = description
  PropertiesService.getDocumentProperties().setProperties(props)
}

function setDocProps(props) {
  let docProps = {}
  props.forEach(prop => {
    docProps[prop.name] = serializeProp(prop.value)
    if (prop.description) docProps[prop.name + propDescSuffix] = prop.description
  })
  PropertiesService.getDocumentProperties().setProperties(docProps)
}

function getDocProp(propName) {
  try {
    if (cachedDocProps[propName]) {
      return cachedDocProps[propName]
    } else {
      const prop = PropertiesService.getDocumentProperties().getProperty(propName)
      if (prop) {
        let result = deserializeProp(prop)
        cachedDocProps[propName] = result
        return result
      } else {
        let result = addDocProp(propName)
        cachedDocProps[propName] = result
        return result
      }
    }
  } catch (e) {
    logError(e)
  }
}

function getDocProps(props) {
  try {
    const docProps = PropertiesService.getDocumentProperties().getProperties()
    let result = {}
    props.forEach(prop => {
      let propName
      if (getType(prop) === "object") {
        propName = prop.name
      } else {
        propName = prop
      }
      if (cachedDocProps[propName]) {
        result[propName] = cachedDocProps[propName]
      } else if (propName in docProps) {
        let thisResult = deserializeProp(docProps[propName])
        cachedDocProps[propName] = thisResult
        result[propName] = thisResult
      } else {
        let thisResult = addDocProp(propName)
        cachedDocProps[propName] = thisResult
        result[propName] = thisResult
      }
    })
    return result
  } catch (e) {
    logError(e)
  }
}

function serializeProp(value) {
  // Type names are padded so the front matter is always exactly 13
  // characters: "{{" + 9-character type field + "}}", matching
  // getPropParts, which reads slice(0,13) and slice(2,11).
  const type = getType(value)
  if (type === "array") { return '{{array    }}' + JSON.stringify(value) }
  else if (type === "bigint") { return '{{bigint   }}' + value }
  else if (type === "boolean") { return '{{boolean  }}' + JSON.stringify(value) }
  else if (type === "date") { return '{{date     }}' + JSON.stringify(value) }
  else if (type === "map") { return '{{map      }}' + JSON.stringify(Array.from(value.entries())) }
  else if (type === "null") { return '{{null     }}' }
  else if (type === "number") { return '{{number   }}' + value }
  else if (type === "object") { return '{{object   }}' + JSON.stringify(value) }
  else if (type === "set") { return '{{set      }}' + JSON.stringify(Array.from(value.keys())) }
  else if (type === "string") { return '{{string   }}' + value }
  else if (type === "undefined") { return '{{undefined}}' }
  else { return '{{string   }}' + value }
}

function deserializeProp(prop) {
  const parts = getPropParts(prop)
  return coerceValue(parts.value, parts.type)
}

function getPropParts(prop) {
  const frontMatter = prop.slice(0, 13)
  if (frontMatter.slice(0, 2) === '{{' && frontMatter.slice(-2) === '}}') {
    const value = prop.slice(13)
    const type = frontMatter.slice(2, 11).trim()
    return {value: value, type: type}
  } else {
    return {value: prop, type: 'string'}
  }
}

function coerceValue(value, type) {
  if (!type || type === getType(value)) { return value }
  else if (type === "array") { return JSON.parse(value) }
  else if (type === "bigint") { return BigInt(value) }
  else if (type === "boolean") { return JSON.parse(value) } // was `new Boolean(...)`, which yields an always-truthy wrapper object
  else if (type === "date") { return new Date(JSON.parse(value)) }
  else if (type === "map") { return new Map(JSON.parse(value)) }
  else if (type === "null") { return null }
  else if (type === "number") {
    const result = Number(value)
    if (isFinite(result)) { return result }
    else { throw new Error("Invalid Number") }
  }
  else if (type === "object") { return JSON.parse(value) }
  else if (type === "set") { return new Set(JSON.parse(value)) }
  else if (type === "string") { return value }
  else if (type === "undefined") { return undefined }
  else { return value }
}

function deleteDocProp(propName) {
  const docProps = PropertiesService.getDocumentProperties()
  docProps.deleteProperty(propName)
  docProps.deleteProperty(propName + propDescSuffix)
}

function deleteAllDocProps() {
  let docProps = PropertiesService.getDocumentProperties().getProperties()
  Object.keys(docProps).forEach(propName => {
    deleteDocProp(propName)
  })
}

function deleteDeprecatedProps() {
  try {
    const defaultPropNames = Object.keys(defaultDocumentProperties)
    const defaultPropDescriptions = defaultPropNames.map(propName => propName + propDescSuffix)
    const currentPropNames = Object.keys(PropertiesService.getDocumentProperties().getProperties())
    currentPropNames.forEach(propName => {
      if (defaultPropNames.indexOf(propName) === -1 && defaultPropDescriptions.indexOf(propName) === -1) deleteDocProp(propName)
    })
  } catch (e) {
    logError(e)
  }
}

function testTypes() {
  // deleteDocProp("tripReviewRequiredFields")
  repairProps()
  // log(PropertiesService.getDocumentProperties().getProperty("tripReviewRequiredFields"))
  // setDocProp("testArray", [1,2,3])
  // setDocProp("testBigInt", BigInt(123))
  // setDocProp("testBool", true)
  // setDocProp("testBoolFalse", false)
  // setDocProp("testDate", new Date())
  // setDocProp("testMap", new Map([[1,"yes"],[2,"no"]]))
  // setDocProp("testNull", null)
  // setDocProp("testNumber", 3.1415)
  // setDocProp("testObject", {1:2,3:4,5:"six","seven":8})
  // setDocProp("testSet", new Set([1,2,3]))
  // setDocProp("testString", "Test!")
  // setDocProp("testUndefined", undefined)
}

function cleanUpTestTypes() {
  // deleteDocProp("testArray")
  // deleteDocProp("testBigInt")
  // deleteDocProp("testBool")
  // deleteDocProp("testBoolFalse")
  // deleteDocProp("testDate")
  // deleteDocProp("testMap")
  // deleteDocProp("testNull")
  // deleteDocProp("testNumber")
  // deleteDocProp("testObject")
  // deleteDocProp("testSet")
  // deleteDocProp("testString")
  // deleteDocProp("testUndefined")
}
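The serialization scheme in the script above keys off a fixed-width front matter: `getPropParts` reads `prop.slice(0, 13)` and takes the type name from `slice(2, 11)`, so every tag is exactly 13 characters ("{{" + 9-character padded type + "}}"). That round-trip idea can be exercised outside Apps Script; the standalone sketch below is a simplified illustration (hypothetical `serialize`/`deserialize` names, only a few types covered), not the script itself:

```javascript
// Standalone sketch of the type-tagged property serialization scheme.
// Front matter is exactly 13 chars: "{{" + type padded to 9 chars + "}}".
function serialize(value) {
  const type = value === null ? "null" : Array.isArray(value) ? "array" : typeof value
  return "{{" + type.padEnd(9) + "}}" + (type === "string" ? value : JSON.stringify(value))
}

function deserialize(prop) {
  const type = prop.slice(2, 11).trim()  // 9-char type field, padding removed
  const raw = prop.slice(13)             // everything after the front matter
  return type === "string" ? raw : JSON.parse(raw)
}

for (const v of [42, true, "hello", [1, 2, 3], {a: 1}, null]) {
  console.log(serialize(v))
}
```

Because document properties are stored as plain strings, the tag is what lets `getDocProp` hand back a number, boolean, or array instead of its string form.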
The cTrader Open API is a service you can use to develop custom applications connected to the cTrader backend. This documentation provides everything you need to know including information about SDKs, structured tutorials, code snippets, and more. What Is Open API? The cTrader Open API is a service that allows anyone with a cTID to create an application sending and receiving information to and from the cTrader backend. You can use this API to develop trading-oriented apps or services or integrate the cTrader backend with any existing solutions you may have. Using this API involves sending and receiving messages to and from the cTrader backend. This is done via sending/receiving either JSON objects or Google Protocol Buffers (Protobufs). Both of these means of data serialisation/deserialisation are language-neutral, meaning that you can use any programming language you want to interact with the API. Note that when this documentation references specific messages (e.g., ProtoOAApplicationAuthReq), it uses the Protobuf notation with ProtoOA... at the start of a message name. The cTrader Open API is available for anyone registered with a cTrader-affiliated broker. Here are just some of the possible applications you may create when interacting with the cTrader Open API. - A custom trading application that funnels new users to create new accounts with a certain broker. - A Telegram bot that automatically informs your followers of any new trades you may have placed. - An app for wearables that displays the current P&L of the five most recent positions opened by the user. - A mobile app that gives a market overview by using a generative AI service. As you can see, the cTrader Open API is perfect for professional traders who want to go social and closely interact with their followers. Here is a non-exhaustive list of what the cTrader Open API allows your code to do. - Access real-time market data.
- Perform all possible types of trading operations permitted in the official cTrader applications. - Retrieve and process information on past, current, and pending operations including deals, orders, and positions. Note that there exist some limits on how frequently you can perform certain requests to the cTrader backend. - You can perform a maximum of 50 requests per second per connection for any non-historical data requests. - You can perform a maximum of 5 requests per second per connection for any historical data requests. Demo and Live Trading You can use the cTrader Open API to trade on behalf of both Demo and Live accounts. We recommend using Demo accounts for development and testing, and then switching to Live after making sure that your integration with the cTrader Open API works as intended. However, there are no hard restrictions, and you may freely choose to start development and testing under a Live account. When integrating with the Open API, you can use either JSON or Protobufs for data serialisation/deserialisation. You can use any language to implement the cTrader Open API. However, if you intend to use Protobufs, we highly recommend using a language that has official SDK support from Spotware. To date, these languages are as follows. Every official SDK listed above contains 'helper' methods and classes that make the implementation of the cTrader Open API as smooth as possible. If you intend to use JSON, there is no need to use our SDKs as handling serialisation/deserialisation in this case is relatively simple.
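The request limits quoted above (50 requests per second for non-historical data, 5 per second for historical data, per connection) are the kind of thing a client would typically enforce before a message ever reaches the backend. The sliding-window throttle below is an illustrative sketch in JavaScript, not part of any official Spotware SDK; only the limit values are taken from the documentation:

```javascript
// Illustrative client-side sliding-window throttle for Open API requests.
// Not part of any official SDK; the limits match the figures quoted above.
class RequestThrottle {
  constructor(maxPerSecond) {
    this.maxPerSecond = maxPerSecond
    this.timestamps = []  // send times recorded within the last second
  }
  // Returns true if a request may be sent now, false if it must wait.
  tryAcquire(now = Date.now()) {
    this.timestamps = this.timestamps.filter(t => now - t < 1000)
    if (this.timestamps.length >= this.maxPerSecond) return false
    this.timestamps.push(now)
    return true
  }
}

const liveThrottle = new RequestThrottle(50)       // non-historical requests
const historicalThrottle = new RequestThrottle(5)  // historical data requests
```

A real client would wrap its send function with `tryAcquire` and queue (rather than drop) requests that exceed the window.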
Previously our protagonist had switched to using fossil instead of Mercurial and Trac. The next step was serving all my various little projects on my home network so they would be easy to browse and sync between laptop and iMac. Shouldn’t be a problem, just add a line to inetd.conf to kick off the fossil http command when receiving a request on a given port. Done. It had been quite a few years since I had configured inetd, but I thought what the heck! It can’t be that hard - except - no (x)inetd on OS X these days. It’s all controlled by launchd now and I couldn’t help but feel I was stropping my blade in preparation to shave the yak. Turns out it wasn’t nearly as bad as I thought. A few trips through the man pages. A search or two on stackoverflow and skadoosh. Here it is.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Sockets</key>
    <dict>
        <key>Listeners</key>
        <dict>
            <key>SockServiceName</key>
            <string>7660</string>
            <key>SockPassive</key>
            <true/>
            <key>SockType</key>
            <string>stream</string>
        </dict>
    </dict>
    <key>inetdCompatibility</key>
    <dict>
        <key>Wait</key>
        <false/>
    </dict>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/local/bin/fossil</string>
        <string>http</string>
        <string>/Users/username/repos</string>
    </array>
    <key>KeepAlive</key>
    <false/>
    <key>Label</key>
    <string>org.localhost.fossilserver</string>
</dict>
</plist>

Hitting my iMac on the local network on port 7660 will serve all the fossil repositories in /Users/username/repos. http://imac.local:7660/work would open the work.fossil in the repos directory. Now it’s time to get something done. Shortly after posting I realized a whole section had been left out. It is missing from my original draft document, so I surmise that while copying the sections over I missed the all important How to load this in launchd? question.
Save the above XML file as ~/Library/LaunchAgents/fossil.plist which will load the configuration every time the system is started. To load it immediately do a launchctl load ~/Library/LaunchAgents/fossil.plist That’s what I get for blogging while tired.
In this edition of OPC Talk with Win and Marc, we’re going to address some common questions we get about redundancy involving OPC software. If you’re new to the industrial automation space and want to learn more about redundancy in automation in general, visit our Demystifying Redundancy in Automation blog post. How do OPC Servers handle redundant PLCs or controllers? Win: Well that depends on what OPC server you are using. Some OPC Servers, such as TOP Server for Wonderware, have configuration settings that let you specify a master PLC and backup PLC plus criteria for when to fail over and fail back to the primary PLC. Marc: I recently worked with a system integrator that was doing a project with 3 redundant controllers, and we were able to handle that using these same features in TOP Server for Wonderware. Are there ways for me to view from my HMI or SCADA software what the status is of the various points of redundancy in my setup? Marc: The TOP Server also provides built-in tags that let an HMI, SCADA or MES system provide operator visibility into which network path or controller is currently being communicated with. When using software to manage your OPC redundancy you need to check with the vendor to see if they support a way to view which OPC Server connection is active. For example, when using the Cogent DataHub for OPC Redundancy you can create 4 tags: Current Source, Source 1 State, Source 2 State and Preferred Source. With these 4 tags you can see which sources are active as well as write to the preferred source to force a failover. What if I have redundant network cards in my computer for separate paths to my PLCs? Marc: Well like Win said, that depends on the OPC server that you are using. With TOP Server you can specify a secondary communications path the same way you specify a redundant PLC or controller, along with failover criteria. Built-in tags provide operational status visibility to your HMI, SCADA, or MES.
I need to have my HMI or SCADA system automatically switch between a redundant pair of OPC servers. How do I do that? Marc: That depends on what HMI or SCADA system you are using. Some systems, such as Wonderware System Platform, have built-in redundancy objects that handle the switching. In them you configure the primary and backup OPC server and they handle the switching and operational state visibility to the operator automatically. Win: I’ve worked with users many times that don’t have the luxury of having redundant OPC connection management built into their HMI or SCADA system. For those users we helped them implement a tool like the Cogent DataHub or the Kepware Redundancy Master. Both Cogent DataHub and Redundancy Master establish connections to primary and secondary OPC servers and then allow for various levels of failover speed that are described in our Maximizing OPC availability blog post. I have had some users whose requirements included very specific needs that a point-and-click solution wouldn’t address. With those users, we used the Cogent DataHub which includes a scripting engine in each license that handles more complex failover scenarios as well as an optional email/SMS notification plug-in. As Marc mentioned earlier, the Cogent DataHub allows you to configure 4 tags to give visibility into which sources are active and allow you to force a failover by writing to a tag. I need to have redundant MES systems, how do I get my production data to these redundant systems? Win: This is just another case of the situation where you have an HMI or SCADA system that needs redundant data. The answer will depend on the methods your MES supports to interface with your SCADA system. If the MES supports OPC DA or OPC UA then you can use something like Cogent DataHub to manage redundancy. The image below is a great example of dual channel redundancy that one of our customers implemented using Cogent DataHub and their MES.
Each SCADA has a local connection to a DataHub. Each MES also has a local connection to a DataHub. From there both of the DataHubs on the SCADA side have redundant connections to the DataHubs on the MES side. This way if any one of the connections between the MES and SCADA layers fails there are still multiple active paths. If you’re interested in doing something like this, contact me and I’ll be glad to help you adapt this architecture to your needs. If the MES does not support OPC then you should talk to the MES vendor and find out what interfaces they do offer. Some may support ODBC database connections or have a custom API. If either of these is the case we have ways to interface with both ODBC databases and custom applications using .NET, Java and C++. I have a custom application that I’ve written that connects to my OPC data sources and I need to setup redundancy in that client application, what do I need to do? Marc: In this case you have a couple of choices. You could write logic in your custom application to detect a failure in communications and switch over to the backup OPC server. You would basically be doing the same thing an HMI/SCADA system does. Most OPC Servers will sit idle until the client connects to request data, so handling the switch in your client ensures that your devices are not overwhelmed by requests from multiple servers. Keep in mind that if the backup OPC server doesn’t start scanning the devices until you switch over to it, the time to see initial data from the new server will increase the length of the perceived “bump”, a topic we discussed in our post Demystifying Redundancy in Automation. The only way around that is to have both OPC servers polling devices at the same time. We have seen some users have the backup server poll the devices at a lower scan rate, and then just increase the scan rate when you fail over to the backup. This can all be accomplished from your OPC client application’s custom code.
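To make the client-managed switch-over described above concrete, here is a minimal, vendor-neutral sketch of the idea in JavaScript. Everything here is hypothetical (the `RedundantReader` name, the `read()` interface); a real implementation would wrap your actual OPC client connections and, per the point about scan rates, keep polling the idle backup slowly so its cache is warm when it is promoted:

```javascript
// Hypothetical sketch of client-managed OPC failover (vendor-neutral).
// A "source" is any object with a read(tag) that throws on a comms failure.
class RedundantReader {
  constructor(primary, backup) {
    this.primary = primary
    this.backup = backup
    this.active = "primary"
  }
  read(tag) {
    const first = this.active === "primary" ? this.primary : this.backup
    const second = first === this.primary ? this.backup : this.primary
    try {
      return first.read(tag)
    } catch (err) {
      // Failover: promote the other source and retry the read once.
      this.active = first === this.primary ? "backup" : "primary"
      return second.read(tag)
    }
  }
}
```

An HMI built on this would also surface `active` as a status tag, mirroring the operator-visibility tags discussed earlier in the post.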
If you are using our OPC Data Client toolkit in your custom application, we can show you how to do this, just contact support for help. Your other alternative is to put the Cogent DataHub Redundancy Add-On on the same machine as your custom OPC client application. Configure the Cogent DataHub to connect to your primary and backup OPC servers and then connect your OPC client to the Cogent DataHub instead of directly to the OPC servers. Adding the DataHub in the middle will only add milliseconds of latency as it’s capable of moving 100,000 points per second, so you most likely don’t need to worry about it slowing things down. How do I decide how much redundancy I need? Win: This should be a decision driven off of business factors, not just trying to achieve technical perfection at any cost. There needs to be a balance between the need for high availability and making the system too complex to maintain. In our post Demystifying Redundancy in Automation, we discuss points of failure, bumps and consequences and how they help you decide how much redundancy is enough. For example, if restarting a process after failure is a long process that results in costly scrap and lost production time, then there may be cost justification for more complex redundancy. I need to go all the way and have few or no points of failure in my OPC software redundancy setup, what are my options? Win: Going to the point of no single point of failure can get complex and expensive, but it is definitely possible. The scope of ensuring no single point of failure clearly goes beyond just your OPC server, client, and redundancy software. It will mean redundant everything all the way down to your servers, power supplies in those servers, battery backups, power feeds for those backups, etc. We have clients that have gone all the way or close to it, and are used to having those conversations if your business justification says it must be that way.
We’re happy to discuss alternatives with you and your system integrator team, dive deep on technical details, and ensure the business side is considered in weighing the various options for the software part of the solution – in other words, we want to provide a reliable solution while keeping in mind that your funds are not unlimited. Can there be such a thing as too much control system redundancy? Marc: The engineer in me wants everything to be perfectly redundant with no single points of failure. But my business mind says yes, you can go too far. What does ‘going too far’ look like? ‘Too far’ means the benefits of having a redundant system are outweighed by the complexity to the point that it is hard for your team to understand how the system works and keep it running properly. You could also go too far if you spend more on a redundancy configuration than the costs or revenue loss that multiple downtime periods from lack of redundancy could cause. Like Win said, we encourage our clients to make informed business decisions based on operational priorities and needs. We’re happy to dig deep in technical details, brainstorm ideas for specific needs, and help them understand what different options might be possible and what they cost, but ultimately we advocate making balanced decisions. If you’d like to learn more, here are a couple of related blog posts you might want to read:
Forum OpenACS Development: Cursors

I encountered some PL/SQL code which makes use of cursors. One of the functions uses the cursors to loop through the result set and to perform different actions based on some logic. However, there are several functions which use cursors to perform a simple select. They will open the cursor as a select, then in the body of the function there is something like this:

OPEN this_cursor;
FETCH this_cursor INTO this_variable;
CLOSE this_cursor;
OPEN that_cursor;
FETCH that_cursor INTO that_variable;
CLOSE that_cursor;
return this_variable + that_variable;

Would there be any reason to use a cursor instead of a select statement? If not, would it make sense to remove the cursors and have the function use select statement(s) instead? Thanks.

The Oracle docs indicate that the NOT FOUND case *should* work just fine with simple selects, BTW, but in practice I found that not to be true, and digging into ACS code a year or so ago I found that they'd run into the same sort of problem, apparently (you can catch the exception raised when no row is returned, but the NOT FOUND example in the Oracle 8i doc doesn't work). As Dan says, this isn't a problem in PG. And as Gilbert noted, the ACS use of cursors often doesn't check for the NOT FOUND case anyway.

"My rule of thumb is always to use an explicit cursor for all SELECT statements in my applications, even if an implicit cursor might run a little faster...

"By setting and following this clear-cut rule, I give myself one less thing to think about. I do not have to determine if a particular SELECT statement will return only one row... I do not have to wonder about the conditions under which a single-row query might suddenly return more than one row, thus requiring a TOO_MANY_ROWS exception handler. I am guaranteed to get vastly improved programmatic control over the data access and more finely-tuned exception handling for the cursor."
From page 166 of O'Reilly's _Oracle PL/SQL Programming_. Maybe the ACS team read the same, and put it into practice. But ... in doing so it will be masking what is very likely a buggy query! This is not good! The query writer in this case has either miswritten the query or didn't understand the data model or the data contained within it. PG does support cursors, BTW - it is PL/pgSQL which doesn't. The current, eventually-to-be 7.2, version of PL/pgSQL actually does, but the version of PL/pgSQL contained in PG 7.1 and previous versions does not.

case when sum(price_charged) is null then 0::numeric else sum(price_charged) end

Is that the same as: By the way, the Oracle code used cursors. Thanks. Seems like it could mask all sorts of errors. The query could be correct, but the data model could have been set up incorrectly, allowing duplicate data to get into the system. Or a uniqueness constraint could be dropped by accident and go undiscovered for a very long time. This is similar to assuming a catch placed around your code is going to help you find a bug. My purpose in posting the above quote was in hopes that someone with real-world experience would explain what was wrong with this reasoning. Thanks Don!
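As an aside, the NULL-handling CASE expression quoted above can be written more compactly with COALESCE; the two forms are equivalent in standard SQL. A quick sanity check using Python's built-in sqlite3 (plain SQL, not Oracle or PG, so the `::numeric` cast is dropped):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (price_charged NUMERIC)")

def totals():
    # The CASE form quoted in the thread (minus the PG-specific ::numeric cast)
    # and the equivalent COALESCE form.
    case_total = conn.execute(
        "SELECT CASE WHEN sum(price_charged) IS NULL THEN 0"
        " ELSE sum(price_charged) END FROM items").fetchone()[0]
    coalesce_total = conn.execute(
        "SELECT coalesce(sum(price_charged), 0) FROM items").fetchone()[0]
    return case_total, coalesce_total

print(totals())  # no rows: sum() is NULL, both forms give 0 -> (0, 0)
conn.executemany("INSERT INTO items VALUES (?)", [(10,), (25,)])
print(totals())  # with rows, both forms give the plain sum -> (35, 35)
```

Either way, the aggregate masks a NULL rather than signalling "no rows", which is the masking concern raised above.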
Program a PIC16F877A using VB.NET I have bought a PIC16F877A microcontroller and its board before and since then, I haven't used it much because I don't really have enough time and patience to learn C. I was thinking of buying a Netduino since it can be programmed using VB.NET (which obviously I'm familiar with). However, I didn't want to abandon the stuff I bought earlier, so I'm asking if there is a way to program the PIC16F877A using VB.NET, maybe using the SDK provided on the Netduino website? Thanks in advance. 368 bytes of RAM means that there can never be a VB.NET VM on the PIC16F877A, but let's see if we can get a more detailed answer @KevinVermeer I mean, is there a way to program it in VB.NET and maybe have a converter convert it to C? Or maybe the same syntax as VB.NET? I have heard about Pigmeo but don't know much about it; might it be a solution? @KevinVermeer, perhaps the answer here could be educational: http://electronics.stackexchange.com/questions/26412/how-cheap-could-a-netmf-board-be-w-ethernet/26429#26429 @SeifShawkat, Pigmeo looks interesting, not much on their wiki about its limitations though. Let us know how it turns out for you... @JonL Sadly, it only has 2 DLLs: PIC16F9.dll and PIC16F716.dll. Do you think that I can use one of them that will work for the PIC I have (PIC16F877A)? @JonL - Yes, that's an excellent answer! Feel free to copy it over and adjust it so that it's applicable here. @SeifShawkat, I have no idea. I know nothing about it other than what I've quickly read from the link you provided. If the idea is just to be able to program the PIC in VB.NET, not really to get all the power and functionality that VB.NET offers, then how about just using BASIC to program it? If the benefit of VB.NET is needed, then a better microcontroller should just be used instead.
@TiOLUWA I want to program the PIC16F877A using the Visual Basic IDE (if I could just write the code in VB.NET syntax in the IDE, save it, and have it converted to assembly or something, that would be great). I found out about the Pigmeo project, contacted its owner, and he replied saying that there hasn't been much development on it lately because he is the only developer; he even told me that anyone can join the development. Problem is, I don't know any assembly or C#. I really want the Pigmeo project to continue, so if you can, try to help develop it. http://dev.pigmeo.org/wiki/Getting_started No. However, you can get a BASIC compiler for your PIC16F877A from Micro Engineering Labs. I've used some PIC products built with that. See http://pbp3.com/download.html -- they have a 15-day free trial; after that you will need to open your wallet. Good luck with your project. You can read the following link for a VB tutorial on the serial port. It has a simple demo to send data from VB to the serial port; later on you can use an 8051 or PIC microcontroller board to decode it in hardware for LEDs. http://embeddedtweaks.wordpress.com/2014/10/30/accessing-pc-serial-port-in-visual-basic/ This does not answer the question, which was about programming the microcontroller with VB.NET, not the PC.
Welcome to my website! I am Fabio Mogavero, a postdoctoral researcher and teaching assistant in Computer Science at the Computer Science Division (Sezione di Informatica) of the Department of Physical Science (Dipartimento di Scienze Fisiche) of the University of Naples "Federico II" (Università degli Studi di Napoli "Federico II"). In January 2011, I received my Ph.D. in Computer Science from the University of Naples "Federico II". As a Ph.D. student, I worked for three years in the Computer Science Division of the Department of Physical Science at the same university, as a member of the formal-systems research group of Prof. Dr. Aniello Murano. During the autumn of 2008, I was a visiting graduate student in the Department of Computer Science at Rice University in Houston, Texas, USA, working under the supervision of Prof. Dr. Moshe Y. Vardi. For a period in the winter of 2010, I was also a visiting graduate student in the School of Computer Science and Engineering at the Hebrew University in Jerusalem, Israel, working under the supervision of both Prof. Dr. Orna Kupferman and Prof. Dr. Moshe Y. Vardi. I graduated with a Master of Computer Science Engineering in October 2007, working on a thesis on temporal modal logics, titled "Branching-Time Temporal Logic: Theoretical Issues and a Computer Science Application", written under the direction of Prof. Dr. Aniello Murano. In September 2005, I also received a Bachelor of Science degree in the same major, discussing a thesis on number factorization methods, titled "Sui Metodi e gli Algoritmi di Fattorizzazione", developed under the supervision of Prof. Dr. Vincenzo Ferone. Important! I am changing my academic email address. The old one, "mogavero [at] na [dot] infn [dot] it", has expired and will not be renewed. In the meantime, please use my personal email address "fm [at] fabiomogavero [dot] com" if you want to communicate with me.
My research interests lie, in particular, in the areas of: formal systems; classical, modal, relevant, and multi-valued logics; syntax and semantics of programming languages; game semantics; applications of game theory to decision problems; and automata on finite and infinite objects. At present, I focus on decidability and undecidability results in logic theories. I am co-chair, together with Aniello Murano and Moshe Y. Vardi, of the First International Workshop on Strategic Reasoning (SR 2013), which will be held in Rome, March 16th-17th, 2013, as a satellite event of ETAPS 2013. I was a member of the organizing committee of GANDALF 2010. Sezione di Informatica, Dipartimento di Scienze Fisiche, Università degli Studi di Napoli "Federico II", Via Cinthia, Complesso Monte S. Angelo, I-80126, Napoli, Italy.
Thank you for your reply, and I appreciated the link for the design impaired, which includes myself! Very useful. In this case, the user will never have 100s, let alone 1000s, of records to browse through. At the most, there might be a few dozen, broken down by business function, so each list might have at most 10 or so lines. And yes, that is still over 200 bits of discrete data. However, some fields will be blank and most of the column data is segmented by time intervals, to produce in one instance a resource recovery table. The purpose of viewing this information is primarily for checking outputs and enabling corrections or modifications to be made, prior to producing a much larger report (printed), which deals with the analysis phase of the project. In fact, putting the data into such an internal scrollable window gives a greater priority to the consistency of the screen design, providing a view that does not change based upon how many columns are to be displayed. The user has the option to scroll with such a window, but in the current case the user has no option, and is forced to scroll if they want to see the right-hand column, which contains guidance notes. Hope this answers your question and allays your concern of a screen comprising tens of thousands of bits of data! How might you approach this? I also take on board the "wait costs". Grrr, just spent an hour typing a reply and lost it all! Teach me to copy/paste to Notepad more often! Evert, Joost, Ricardo, thank you for your replies. Here is an example table: Ref (SmallInt) | Business Area (Char50) | Business Function (Char50) | Critical Activity (Char50) | Impact Measured (Char200) | 1hr (SmallInt) | 2 hrs | 4 hrs | 6 hrs | 24 hrs | 48 hrs | 3 Dys | 1 Wk | 10 Dys | 2 Wks | 3 Wks | 4 Wks | MTPD | RTO | Notes (Char2000) For ease of visualisation I have attached two screenshots, one showing an abbreviated version of the above. Some notes on the screenshots. 1.
I decided to lose the left-hand menu to free up screen equity. Happy with this, as this is not a focus of these screens. Also, further equity could be released by losing the first two columns and making them into section titles instead. 2. The tables are fixed width. Whilst that keeps my browser the same size, it presents challenges in displaying the text fields, especially the Notes column. Evert, thank you for your suggestion. I will give some more thought to using popups. Furthermore, on producing reports to fit on A4 paper, I can reduce the font; however, I would rather not do that on screen. Also, if the time columns became text fields, we would face even greater challenges! 3. Please excuse the test data, just coding to test that everything ends up in the right place! Joost, thank you for your suggestion of using tabs. This will definitely help on some other tables. I am not sure that tabs will help in this table, however, and they would take away the ability of the user to have an overview of the table. If anything could be tabbed, then 'Notes' would be the most likely candidate. The information gathering and display has already been 'chunked' into manageable and common-themed sections. For example, there are a number of resource recovery tables, for workspaces, communications, etc. Hence each table is unlikely to exceed 10 rows for each business function. Ricardo, yes, there are many validation points. In fairness they are more completion points, which are all viewable on a dashboard. The Recovery Table is in itself a validation point, which would be 'signed off' on completion. Is it possible to break it down further? Everything up to and including Impact Measured has already been validated before this screen, but these columns are still required for display, as it is against these headings that we are measuring recovery times or resource requirements. Regarding your note, I agree. For myself, I have spent the last two months learning the technology.
Now that I am more familiar and confident with it (though I still have lots to learn), my thoughts (at least 90%) are very much on usability and the end user, and not the technology battles :) Another thought is to use grouping, in the same way as one can group columns and rows in Excel, which can be expanded or collapsed. Not wanting to bring up technology, but I sense that the advice is to avoid using iFrames, or at least an internal scrollable window? Is there a reason for this (other than that being part of my original question)? Thank you for your thoughts and suggestions.
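For what it's worth, the internal scrollable window discussed above does not require an iFrame at all; a plain CSS overflow region gives the same effect. A minimal sketch (the column set is abbreviated from the table definition above, and the single row is placeholder data):

```html
<!-- Scrollable region without an iFrame: the page keeps a fixed height
     while the table scrolls inside it. -->
<div style="max-height: 20em; overflow: auto; border: 1px solid #ccc;">
  <table>
    <thead>
      <tr><th>Ref</th><th>Business Function</th><th>1hr</th><th>2 hrs</th><th>Notes</th></tr>
    </thead>
    <tbody>
      <tr><td>1</td><td>Example function</td><td>0</td><td>2</td><td>Test data</td></tr>
    </tbody>
  </table>
</div>
```

Unlike an iFrame, this keeps the table in the same document, so styling, printing, and accessibility behave consistently with the rest of the page.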
Reading Time: 3 mins MySQL is open-source software and one of the most widely adopted relational database management systems (RDBMS). It is also a component of the LAMP (Linux, Apache, MySQL, Perl/PHP/Python) software stack and supports almost every other operating system. This article will walk you through installing, configuring, and testing the MySQL database server on Ubuntu 18.04. - Note that the given instructions apply to installing the MySQL server on Ubuntu 18.04. - It is presumed that you are logged in as a non-root user with sudo privileges. Step 1: Updating the default packages It is necessary to check whether the default packages are up to date. To update the package index on your server, use the command: sudo apt-get update Now the system is ready for the installation process. Step 2: Installing MySQL After updating the package list, installation is a single command through the apt package manager: sudo apt-get install mysql-server MySQL is now installed on your system. Step 3: Configuring MySQL (Optional) To secure MySQL (a fresh installation is not secure by default), run the security script: sudo mysql_secure_installation This will remove some of the less secure defaults, such as the sample users and remote root login. In the upcoming prompts, you can enable the Validate Password Plugin and confirm a unique password for the root user. After that, you can proceed by answering the remaining questions by pressing Y and Enter. Note: For versions before 5.7.6, use mysql_install_db; for versions 5.7.6 and later, use mysqld --initialize.
Step 4: Configuring the root user authentication and other privileges To configure password authentication, you need to switch the root user's authentication plugin from auth_socket to mysql_native_password. Open the MySQL prompt with: sudo mysql Next, check the authentication plugin used by the MySQL accounts: mysql> SELECT user,authentication_string,plugin,host FROM mysql.user; From the output, it is apparent that the root user is authenticated by the auth_socket plugin. To change this, use the ALTER USER command: mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'; Note: Be sure to choose a strong password; it will replace any password previously set for the user. To put the changes into effect, reload the grant tables by running FLUSH PRIVILEGES: mysql> FLUSH PRIVILEGES; To cross-check that root is no longer authenticated by the auth_socket plugin, rerun the query: mysql> SELECT user,authentication_string,plugin,host FROM mysql.user; The output confirms that the root account now uses mysql_native_password authentication. Now exit the MySQL shell: mysql> exit To connect to MySQL as a dedicated user, open the shell again with: sudo mysql -u root -p To create a database, use the command: mysql> CREATE DATABASE database_name; Now, create a new user with a (strong) password: mysql> CREATE USER 'charlie'@'localhost' IDENTIFIED BY 'password'; To grant the user privileges, use the command: mysql> GRANT ALL PRIVILEGES ON *.* TO 'charlie'@'localhost' WITH GRANT OPTION; Here you need not run FLUSH PRIVILEGES, since a new user was created rather than an existing one altered; it does no harm, however: mysql> FLUSH PRIVILEGES; Now that the configuration of the user authentication and the granting of the necessary privileges are done, you can exit the shell with: mysql> exit
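The database and user creation steps above can also be collected into a single script and piped through the client in one invocation; a sketch, keeping database_name, charlie, and password as the placeholders used in the text:

```sql
-- Run as: sudo mysql -u root -p < setup.sql
CREATE DATABASE database_name;
CREATE USER 'charlie'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'charlie'@'localhost' WITH GRANT OPTION;
```

This is handy when provisioning several machines, since the whole setup becomes repeatable instead of interactive.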
Polymorph Improvements

A reimplementation of #1604. This adds some new features:

- Keep Self: Transforms only the image; useful for disguise self, alter self, etc. Resolves #1100
- Remove Effects: A selection of options to allow removal of various types of effects. Resolves #1139
- Retains a token's display name and attribute bar display settings during transform. Resolves #1391
- Fixes deprecation warnings. Resolves #1820
- Allows unlinked tokens to transform back to their original token data. Resolves #1823
- Fixes sight transfer issues. Resolves #1825

Some other improvements to follow, but this seemed like a good time for a first review. The changed dialogue looks like:

Regarding the UI, perhaps it might be better to have the options be contained within a scrollable box? Similar to how the module settings window works. Some other things that spring to mind (but that may be outside the scope of this PR): Have the presets be in a select element. When a preset is selected, the relevant inputs would be updated to reflect the preset's configuration. Have the polymorph presets be data driven. That is, there would be a value in the system's CONFIG variable like:

CONFIG.polymorphPresets = {
  "DND5E.PolymorphSelf": {
    keepSelf: true,
    transformTokens: true,
    removeAE: true,
    removeOriginAE: true,
    removeOtherOriginAE: true,
    removeFeatAE: true,
    removeSpellAE: true,
    removeEquipmentAE: true,
    removeClassAE: true,
    removeBackgroundAE: true
  },
  ...
}

This is probably a bit of churn at this stage, so I apologise for not mentioning it sooner, but it only just occurred to me that we write most of the 'custom transformation' options in terms of 'keeping' something, but the 'effect transformations' are written in terms of 'removing' them. Is that because we used to always keep all effects before?
Yes, the default was always to keep all effects on an actor, and for things like status or spell effects you probably want to keep most of them. It seemed like a more natural fit here, and was also slightly less convoluted when implementing a removal system rather than figuring out which effects to keep.

Could we try reframing these options in terms of 'keeping' rather than 'removing', unless you have a strong reason why we shouldn't. I worry we might have some regret here later if we do not opt for this piece of consistency now.
I also feel it currently offers a slight UX hitch with users having to switch contexts from 'keep' to 'remove' when looking across the two lists. Changed. @Fyorl, ready for another round. 👯 This looks pretty much good to go, thanks for the continued iteration. Did you manage to verify if we could store the actorData on the TokenDocument flags instead of as a flag within the actorData itself, per this comment? Okay, changes made to this; my testing indicates it's fine on the token.
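The data-driven preset idea suggested in the review could work roughly like this: when a preset key is chosen in the select element, its settings are merged over the dialog defaults and used to update the inputs. A sketch only — the names `polymorphPresets`, `DEFAULTS`, and `applyPreset` are illustrative, not the actual dnd5e system API:

```javascript
// Preset table as it might live in the system CONFIG (abbreviated here).
const polymorphPresets = {
  "DND5E.PolymorphSelf": { keepSelf: true, transformTokens: true, removeAE: true }
};

// Dialog defaults used when no preset (or an unknown key) is selected.
const DEFAULTS = { keepSelf: false, transformTokens: true, removeAE: false };

// Merge the chosen preset over the defaults; the result drives the form inputs.
function applyPreset(key) {
  const preset = polymorphPresets[key] ?? {};
  return { ...DEFAULTS, ...preset };
}

console.log(applyPreset("DND5E.PolymorphSelf")); // preset values win over defaults
```

Keeping the presets as plain data like this also makes it easy for modules to register their own entries without touching the dialog code.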
Feign client execution failure: java.lang.reflect.InvocationTargetException

I have a problem with inter-service communication load balancing. I am using Spring (1.4.2) and Spring Cloud with Netflix OSS. I have two services, shoppingcart-service and user-service. Here is the ShoppingCartController from the shoppingcart-service app:

@RestController
@RequestMapping("shoppingCarts")
public class ShoppingCartController extends AbstractRESTController<ShoppingCart, String> {

    private ShoppingCartService shoppingCartService;

    @Autowired
    public ShoppingCartController(ShoppingCartService service) {
        super(service);
        this.shoppingCartService = service;
    }

    @RequestMapping(value = "{userId}/createShoppingCart", method = RequestMethod.POST)
    ShoppingCart createShoppingCart(@RequestBody List<CartItem> items,
                                    @PathVariable(name = "userId") String userId) {
        Boolean userOK = shoppingCartService.checkUser(userId);
        if (userOK != null)
            if (userOK)
                return shoppingCartService.createShoppingCart(items, userId);
        return null;
    }

    @FeignClient("user-service") // the server.port property name, for the "server" service
    public interface UserServiceClient {
        @RequestMapping(value = "users/checkUser", method = RequestMethod.POST) // the endpoint which will be balanced over
        Boolean checkUser(@RequestParam(name = "userId") String userId); // the method specification must be the same as for users/hello
    }
}

The checkUser method is from the ShoppingCartService class:

@Service
public class ShoppingCartService extends AbstractCRUDService<ShoppingCart, String> {

    private ShoppingCartRepository shoppingCartRepository;
    private RestTemplate restTemplate;

    @Autowired
    private UserServiceClient userServiceClient; // feign client

    @Autowired
    public ShoppingCartService(ShoppingCartRepository repo, RestTemplate restTemplate) {
        super(repo);
        this.shoppingCartRepository = repo;
        this.restTemplate = restTemplate;
    }

    /**
     * Checks if the given user is registered and active.
     * We use Ribbon and Feign to get data from user-service (load balancing).
     * @param userId
     * @return
     */
    @HystrixCommand(fallbackMethod = "fallbackCheckUser")
    public Boolean checkUser(String userId) {
        /* USING LOAD BALANCING */
        Boolean resp = userServiceClient.checkUser(userId); // HERE I GET THE EXCEPTION
        return resp;
    }

    public Boolean fallbackCheckUser(String userId) {
        return true;
    }
}

When I try to execute the checkUser(userId) @HystrixCommand method I get: java.lang.reflect.InvocationTargetException. Please HELP.

UPDATE 1: shoppingcart-service pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>rs.uns.acs.ftn</groupId>
    <artifactId>ShoppingCartService</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>ShoppingCartService</name>
    <description>Shopping Cart Service</description>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.4.2.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-feign</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-hystrix</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-ribbon</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-eureka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-mongodb</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jglue.fluent-json</groupId>
            <artifactId>fluent-json</artifactId>
            <version>2.0.3</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>Camden.SR2</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Please add more of the exception. That single line is useless. The stack trace is empty on the exception object, and I also don't get any output on the console. I also have 3 more services running alongside Eureka and Zuul. I will put the pom.xml for shoppingcart-service; please see the update. I don't know how to help unless you can provide a sample. Camden.SR6 is the latest in the Camden release train. com.netflix.hystrix.exception.HystrixRuntimeException: UserServiceClient#checkUser(String) failed while executing. Does this give any ideas? This is all I can get. I think I know what I did. I had two services with Feign client specifications that were the same, i.e. @FeignClient("user-service"). @spencergibb can you tell why? Final conclusions: 1. I was using the STS IDE for development and starting apps. For some reason this native starting does not work properly. If I start a service via right-click -> Run As -> Java Application, then the first request fails on the fallback method, and every new request passes without problems.
When I make .jar files using mvn build, the same behavior: the first request triggers the fallback method, every other passes. If someone can explain this, please comment. I have no idea. Spring Cloud, Feign, and Ribbon aside, these "REST" endpoints don't seem to follow recommended practices. POST to check if a user exists doesn't look right; POST is normally used to create a resource such as a user or product. POST to /..../{userId}/createShoppingCart doesn't look right either. Nouns are recommended in REST API design, e.g. /users, /products, /users/{id}, and HTTP verbs (POST, PUT, PATCH, DELETE, GET, ...) represent operations on those "nouns": POST /users means create a user, PUT /products/{id} means update the product whose id is {id}. Verbs are usually not recommended as part of the URL. And as @spencergibb mentioned, without the config files (application.yml or properties), the source code, and a meaningful stack trace it would be very difficult to troubleshoot this issue. One possible issue might be your pom file missing the <start-class> element inside <properties>. The HTTP API specification differs from use case to use case. But this has nothing to do with the issue that I have right now.
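For reference, the suggested <start-class> addition would sit in the pom's <properties> block along the following lines. The main class name here is a guess derived from the artifact's groupId, not taken from the project; substitute the actual @SpringBootApplication class:

```xml
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>1.8</java.version>
    <!-- Hypothetical main class name; use your real @SpringBootApplication class -->
    <start-class>rs.uns.acs.ftn.ShoppingCartServiceApplication</start-class>
</properties>
```

With the Spring Boot Maven plugin, <start-class> tells the repackaged jar which main class to launch when it cannot be inferred.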
The Android Lollipop update is right around the corner, and as such the new 5.0 API has been released! This update means a lot for the Android community, for both developers and users alike. Lollipop introduces a new design specification, preferring more flat colors rather than gradients and shine. It also introduces "Material Design", which covers basic design (as mentioned above), as well as the user experience. More specifics on Material Design can be found here: http://www.google.com/design/spec/material-design/introduction.html#. Since this has come out, and Lollipop will be coming out before we actually release our app, I have taken it upon myself to start porting what we have to better fit with the design spec, and provide a clean and simple user experience. Main things that need to be done: Customizing the status bar and implementing "Material" aspects into the design. A good example of what I mean can be found in the Developer preview, in Google apps. Touching buttons now just "feels" better, because of animations that play on press. If you hold a button, or menu item, it shows the same animation much slower. This helps the experience feel much more natural, more like touching an actual object instead of just a screen. As such, the users may have a better experience than before, when there was just a slight highlight. All in all, there probably isn't that much I will need to do to port over what I have to Lollipop's API. It will just require a bit of trial and error. I added a LocationManager object to facilitate getting the speed from the GPS. It currently will get the last known location and the speed of that. I'm not sure at this point if it will be acceptable or not. Additionally, I cannot test this immediately because I do not have an Android device nor do I have GPS on my laptop. I will try to get together with Phil to test this soon. 
Also, I previously updated the Android version with a Handler that will tick every given number of seconds in the Log class. I guess I just didn't write a blog post about it. This will be used in conjunction with the new LocationManager to get the speed on each tick. Lastly, I mentioned before that we have a different GitHub repository than the main Spoiler iOS one. It can be found here: Take note that this is a different repository than our original Android one (but the same link that Phil has posted), because we had to create a new one to deal with IDE changes. All set now! We have reveal modals now! When someone wants to log in, access account preferences, or create an account, the window dims and a sleek little panel will drop down for people to input information. This is used instead of a whole page for all three functions, which we found looks barren and boring. These reveal modals are super cool, in my opinion. Also, great strides have been made in responsiveness. Now when the window is resized, the content moves around accordingly to fit the information on the pages into more appealing layouts that work well with mobile devices. We can test how this will look using Chrome's developer tools, which include a mobile view of our site, which is extremely helpful! Next I'll be working on the Submit a Certificate page and the layout of the admin's inbox-type page. Changed the Log class to handle updating speed and logging the speed to be smarter and use less memory. Previously, the logging class was going to hold speed measurements in a buffer-like system and save to a file at different intervals; however, this is unnecessary considering the file stream could be opened and saved quickly while reducing memory usage. Another change to the Log class is that the measurement taking will be handled by the class itself as opposed to being driven by the view controller. The next step is to get measurement taking to be correct and accurate.
There was a lot done since the last blog post. The biggest change was to how the detection works. Before, I planned on having the app fire a notification during night hours the moment the user's distance from one of their home locations exceeded a certain customizable threshold. However, a better way of doing things is to send the notification the instant the user enters their home location during night hours. This is better because the notification would then fire (ideally, barring GPS inaccuracies) as soon as they leave their car and are walking to their house. In addition, the app now plays a sound and vibrates upon notification, so the user is reminded pretty aggressively. I plan on publishing the Android version before moving on to the iOS version. In retrospect, this will take a little more work than anticipated, due to the complexity of publishing. In addition, I plan to make a "help" website that gives details on how to use the app, since the way the app works is, by nature, not too intuitive.
- "signageOS Unveils Raspberry Pi Integration for Digital Signage" (Digital Signage Connection): signageOS Inc., San Francisco, CA, October 10, 2018: signageOS, the first global unification platform for digital signage, introduces the Raspberry Pi custom integration for digital signage. The Raspberry Pi can be used as a standalone player or as an ...
- "RK3399-based Raspberry Pi clone starts at $75" (LinuxGizmos.com): The new NanoPi M4 beats both models with a Raspberry Pi-like 85 x 56mm footprint and a low $75 price for the 2GB version ($65 if you manage to get one of the first 300 boards). The 4GB version costs $105, or $95 early bird. Meanwhile, the somewhat more ...
- "Serving up five real-world applications for Raspberry Pi" (TechTarget blog)
- "We thought Raspberry Pi was a fad" (Geeky Gadgets)
- "Updated Raspberry Pi Handheld Console GamePi 2" (Geeky Gadgets): Building on the design of the first GamePi Raspberry Pi handheld games console, master maker, Raspberry Pi enthusiast and gamer Max Mustermann has released a second version of the design, which was originally rolled out earlier this year.
- "Going back for a third serving of Raspberry Pi" (PC Perspective): Tim did a great write-up of the new hardware found in the Raspberry Pi 3 Model B+, which you should check out below if you missed it. Technical specifications are only the first step, as we still need to see how the new 1.4GHz Cortex-A53s perform in ...
- "NODE Always On Networked Raspberry Pi Plug Mini Server" (Geeky Gadgets): Developer and Raspberry Pi enthusiast N-O-D-E has this week published details about updates made to the Raspberry Pi Zero W powered Pi Plug, which is currently still in its prototype stage. The latest design builds on the first and is now more ...
- "The best DIY computer kits for 2018" (Tech Advisor): Although it certainly wasn't the first DIY computer kit, the Raspberry Pi has quickly become the most widely known platform thanks to a few key elements. Firstly, it's cheap; second, it's British (although production was moved from Wales to China in ...)
- "Raspberry Pi as SDR?" (Southgate Amateur Radio Club): In the first of a three-part look at some ways to use a Raspberry Pi for amateur radio purposes, Pete M0PSX looks at setting up an RTL-SDR dongle on a Raspberry Pi 3.
- "DesignSpark Pmod adapter for the Raspberry Pi" (New Electronics): RS says this has broken new ground, as it is the first DesignSpark-branded product released in conjunction with one of its suppliers. The board is supported by some examples of Python software code, which have been developed specifically ...
- "DLP platform for 3D vision teams up with Raspberry Pi" (LinuxGizmos.com): Keynote Photonics also announced an LC3000-EKT evaluation kit for developers who want to bring their own optical engine (see farther below).
I built a new tint2 taskbar for use on Openbox. Now it's time to set it up for everyday use. It also includes networking applets and other nice goodies. Openbox is a lightweight, powerful, and highly configurable stacking window manager with extensive standards support. It's lightweight, but not so much that you can't still use it the way you'd expect. By default, the Xfce session manager manages the startup of applications. Install Xfce in CentOS.

Which desktop environment should you use with tint2? You can use Openbox with the Xfce environment; they are completely different animals. Second, exit out of the Xfce session, and make sure to tick the checkbox that says "Save session for future login."

Before we begin: Openbox is a lightweight window manager with many themes to choose from. I normally use Openbox with the xfce4-panel and Thunar, but I'm going to take a look at lxqt-panel with the Breeze theme. I tried Openbox but wasn't very impressed by the amount of work needed to get a reasonable desktop. I know, because I adapted the Clearlooks Phenix Purpy color theme for Openbox. Xfce is no longer officially supported. Thanks Jeb, hope you'll find LXQt interesting! I've been quite busy lately, so there has been no time to work on Openbox stuff; I'll have to defer comment for a bit.

Openbox is light on system resources, easy to configure, and a pleasure to use. Note: if you use a window manager rather than a desktop environment, consider following this guide to learn how to set up tint2 on the Openbox window manager. Some individuals in various forums have claimed that LXDE is more "modular", and therefore easier to develop for, than Xfce. However, I will say that Xfce would be my preferred choice if I were to use a prebuilt desktop environment. From the terminal, issue the following command:

user $ killall xfwm4 ; openbox & exit

Getting Openbox installed was just the first step in setting up our environment. Tick the compositor to off in the window manager tweaks UI. The live-disk distribution Parted Magic released its latest version; in this release, Parted Magic officially migrates from the Openbox window manager to the Xfce desktop environment. There are also spins that use Xfce or Openbox on Fluxbox CE; go check them out here. You never know when you'll be in front of a desktop that secretly uses Openbox as its window manager (and won't it be nice that you know how to customize it?). Our goal is to change the autologin from Xfce to Openbox or i3. Add the Manjaro Openbox flavor to Manjaro Xfce. Hi everyone, I would like, with your cooperation if possible, to understand how users use Xfce and Openbox, to get a general idea about these two environments. The next steps are all about adding functionality and configuration. Enter sudo apt-get install openbox in the terminal to install it. Openbox pops up in the unlikeliest of places, so it can be a good system to get familiar with. It should be noted that Openbox is actually a window manager and must be used in conjunction with one of the desktop environments listed above. Your level of Linux can be beginner, intermediate, or expert; it's probably better to just try it out and see for yourself whether you like it or not. You can also get ... I turned off and removed the four native antiX window managers (IceWM, Fluxbox, JWM, herbstluftwm). Thunar can't create multiple panes in its window, but it does provide tabs, so multiple directories can be open at the same time. Using the Windows Subsystem for Linux with Xfce 4, posted on April 16, 2017 by Paul. The concept of ArcoLinux is that anyone can use it.
In my previous article, I've shown you how to install WSL, the Windows Subsystem for Linux, on a fresh Windows 10 Creators Update. On an Arch-based system such as Manjaro, you can pull in Openbox and a full set of companion tools in one go:

sudo pacman -S openbox obmenu-generator nitrogen geany compton tint2 pnmixer conky lxappearance lxappearance-obconf hardinfo htop leafpad lesstif libtimezonemap libwbclient libxp lxinput lxrandr manjarobox-evolution-themes pam_encfs parcellite prebootloader printproto py3parted rlog slim tintwizard xautolock xlockmore synapse gnome-alsamixer

ArcoLinux serves several kinds of users:
- people that will never use Openbox and i3 on the ArcoLinux ISO;
- people using the ArcoLinuxD ISO and installing only Xfce;
- people using the ArcoLinuxB Xfce ISO, full or minimal.

This keyboard shortcut will conflict with Openbox settings. I find Xfce to be the best all-round desktop environment and can't fault it at all, but I wanted to merge in the awesome experience that Openbox gives as a lightweight window manager. To get a layout like that, be prepared to drill down deeply into the theme structure. Follow the tutorial to see how to set it up. If you haven't installed Openbox already, then:

sudo apt-get install openbox obconf menu obmenu conky parcellite feh rox-filer tint2 xfce4-panel

Thunar, Xfce's default file manager, is simple and easy to use, yet capable and very fast. As mentioned earlier, tint2 will work on any desktop environment, even a heavier one like KDE Plasma. Running a lightweight window manager also helps make a remote desktop connection much more enjoyable, and it is a good fit if you have an older or lower-configuration PC with 32-bit architecture. If you need a terminal, press Alt + F2, type xterm, and press Enter. In the end, trying Openbox yourself for 10 minutes will tell you more than you'll find out from what anyone else says about it.
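The install commands above pull in tint2, nitrogen, compton, and friends, but they never show how to launch those tools when Openbox starts. Below is a minimal, hypothetical sketch of an Openbox autostart file; the tool names come from the install commands above, while the file path and the choice of what to launch are assumptions you should adapt to your own setup:

```shell
# ~/.config/openbox/autostart - sourced once per Openbox session
nitrogen --restore &   # restore the wallpaper last set with nitrogen
compton &              # compositor for shadows and transparency
tint2 &                # the taskbar/panel built earlier
parcellite &           # clipboard manager
conky &                # system monitor overlay
```

Openbox runs this file as a shell script at login, so each long-running tool is backgrounded with `&` to keep the session startup from blocking.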
How are firewall rules displayed?
On a firewalld system, show the active rules with firewall-cmd --list-all; on a plain iptables system, use iptables -L.

How do I check the firewall status on Linux?
By default, the firewall is active on a newly installed RHEL system. This is the preferred firewall state unless the system is running in a secure network environment or is not connected to a network. To enable or disable the firewall, select the appropriate option from the Firewall drop-down menu.

Where are firewalld rules stored?
Firewalld stores its configuration in /etc/firewalld, and in this directory you will find various configuration files:
- firewalld.conf provides the global configuration.
- Files in the zones directory provide your custom firewall rules for each zone.
- Files in the services directory provide custom services that you have defined.

How do I open Windows Firewall from the command line?
If you're a fan of the command line, you can use Command Prompt or PowerShell to open Windows Firewall. Enter the same command as in the Run window, "firewall.cpl", and press Enter on the keyboard.

Where are iptables rules stored?
Rules are stored in the /etc/sysconfig/iptables file for IPv4 and in the /etc/sysconfig/ip6tables file for IPv6. You can also use the init script to save the current rules.

How do I start the firewall on Linux?
On Red Hat 7 Linux systems, the firewall runs as the firewalld daemon. The following command can be used to check its status:
[root@rhel7 ~]# systemctl status firewalld
firewalld.service - firewalld - Stateful Firewall Daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.

How do you unmask firewalld?
See the article "How to hide and show the firewalld service on RHEL/CentOS 7.x".

How do I check my ip6tables status?
Use systemctl status ip6tables (or service ip6tables status on older releases).

What is a firewalld rich rule?
Rich rules are an additional firewalld feature that allows you to create more sophisticated firewall rules.

What is the difference between iptables and firewalld?
iptables and firewalld have the same goal (packet filtering) but take a different approach: iptables flushes the whole set of defined rules each time a change is made, unlike firewalld, which applies changes without reloading the entire rule set.

How is firewalld run?
It runs as a systemd service (the firewalld daemon), managed with systemctl as shown above.

What are netsh commands?
Netsh is a command-line scripting utility that allows you to view or change the network configuration of a running computer. Netsh commands can be run by typing commands at the netsh prompt, and they can be used in batch files or scripts.

How do I run the Control Panel from the command line?
The Run command for the Control Panel is control.

How do I check Windows Firewall rules?
Use netsh advfirewall firewall show rule name=all to list application-specific firewall rules.
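A few concrete commands tie the answers above together. These are standard firewalld and iptables invocations on RHEL/CentOS 7; they require root and a running firewalld daemon, so treat them as a firewall configuration transcript rather than something to paste blindly. The 192.168.1.0/24 subnet in the rich rule is an arbitrary example:

```shell
# Check whether firewalld is running
systemctl status firewalld
firewall-cmd --state

# Display the rules in the active zone
firewall-cmd --list-all

# Add a rich rule permanently: allow SSH only from one subnet, then reload
firewall-cmd --permanent \
  --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" service name="ssh" accept'
firewall-cmd --reload

# Unmask and start firewalld if it was masked
systemctl unmask firewalld
systemctl start firewalld

# On pure-iptables systems, save the current rules to /etc/sysconfig/iptables
service iptables save
```

Note the difference in persistence models: firewall-cmd changes without --permanent apply immediately but vanish on reload, while iptables changes live only in the kernel until explicitly saved.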
Storage Spaces Direct, or S2D, is a technology included in Windows Server 2016 that is gaining more and more momentum and adoption in current IT infrastructure. In this article, we will take a deeper look at the technology to discover the benefits it brings.

What is S2D? When you browse the Microsoft documentation regarding S2D, you'll probably find the following explanation:

Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged or hyper-converged architecture radically simplifies procurement and deployment, while features like caching, storage tiers, and erasure coding, together with the latest hardware innovation like RDMA networking and NVMe drives, deliver unrivaled efficiency and performance.

Now you may be thinking, "Wow, that's a lot of incredible things I've never heard about!" So let's pick out the buzzwords and explain them step by step.

S2D uses industry-standard servers with local-attached drives

This one is easily explained. To deploy S2D you don't use any kind of special SAN or NAS hardware; you use standard servers from vendors like Dell, HP, Lenovo, or whoever else delivers supported servers. Many vendors offer servers that are perfect for S2D. All data is stored on enterprise-grade local disks in the servers. There is no central storage device, and no special disks are required. The picture below shows a Dell PowerEdge R730XD, which is an example of one such server. Fun fact: those server designs are inspired by Microsoft; I will explain why later.

highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays

When you compare S2D with classic SAN devices, most SAN devices use hardware controllers and management modules with special firmware, and you are bound to those hardware requirements and capacities.
You are also not able to mix SAN devices from different vendors in a replication cluster; your Dell Compellent would never talk to an HP LeftHand, for example. With S2D, all hardware components are kind of "stupid", without higher logic. S2D abstracts the higher-level operations you typically see in hardware controllers into software and the operating system. Every disk, replication, and data operation is managed by Windows. As a simple example, S2D does not use any RAID controller in the server; it only uses SAS expanders and SAS cards without any RAID capabilities. These technologies give you a great opportunity to mix hardware types if needed and buy what is best for you. For example, say you started with a Dell server for the first deployment and, a few months later, you need to expand the deployment, and now an HP server with the same hardware specifications is cheaper. No worries: you can mix them in an S2D cluster as long as the CPU and disk configurations match. With that said, we'll be talking about hardware further in a later part of this series.

Its converged or hyper-converged architecture radically simplifies procurement and deployment, while features like caching, storage tiers, and erasure coding

With S2D, Microsoft delivers a storage solution in which you can provide all the necessary components for the operation of a cluster out of a single server. The system will have a hypervisor and usable storage directly within the box, without any need for additional shared storage. An S2D storage solution also provides a storage cache within memory, and storage tiering between different disk types like NVMe, SSD, and HDD. Together with erasure coding, which duplicates and distributes data packages to the storage subsystems of the other cluster nodes, you get a very efficient and highly available storage system that runs with all features enabled and with no additional licensing costs.
latest hardware innovations like RDMA networking and NVMe drives When it comes to technology, S2D leverages technologies which were not very common in the past or were developed together with the help of Microsoft. Two of those examples are RDMA or Remote Direct Memory Access and NVMe or Non-Volatile Memory Express. In computing, remote direct memory access (RDMA) is a direct memory access from the memory of one computer into that of another without involving either one’s operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters. NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCI) is a logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus. The acronym, NVM, stands for non-volatile memory, which is commonly flash memory that comes in the form of solid-state drives (SSDs). NVM Express, as a logical device interface, has been designed from the ground up to capitalize on the low latency and internal parallelism of flash-based storage devices, mirroring the parallelism of contemporary CPUs, platforms and applications. While this article simply serves as an introduction to S2D, there is MUCH more to S2D. Next, we’ll dive deeper into technologies like RDMA, NVMe, Storage Tiering, and other technologies that are used in S2D. Based on what you’ve heard thus far, does S2D sound like something you’d use in your own environment? Let us know in the comments section below! Not a DOJO Member yet? Join thousands of other IT pros and receive a weekly roundup email with the latest content & updates!
<?php

declare(strict_types=1);

namespace KhsCI\Service\Issue;

use Curl\Curl;
use Exception;
use KhsCI\Support\JSON;
use KhsCI\Support\Log;
use TencentAI\Error\TencentAIError;

class CommentsGitHubClient
{
    /**
     * @var Curl
     */
    private $curl;

    private $api_url;

    /**
     * @var \TencentAI\TencentAI
     */
    private $tencent_ai;

    public function __construct(Curl $curl, string $api_url, \TencentAI\TencentAI $tencent_ai)
    {
        $this->curl = $curl;
        $this->api_url = $api_url;
        $this->tencent_ai = $tencent_ai;
    }

    /**
     * Create a comment (expects HTTP 201).
     *
     * @param string $repo_full_name
     * @param int    $issue_number
     * @param string $source
     * @param bool   $enable_tencent_ai
     *
     * @return mixed
     *
     * @throws Exception
     */
    public function create(string $repo_full_name, int $issue_number, string $source, bool $enable_tencent_ai = true)
    {
        $url = $this->api_url.'/repos/'.$repo_full_name.'/issues/'.$issue_number.'/comments';

        $source_show_in_md = $source;

        $data = $source;

        if ($enable_tencent_ai) {
            $nlp = $this->tencent_ai->nlp();
            $translate = $this->tencent_ai->translate();

            // Detect the language: defaults to en, supported values are en or zh
            try {
                $lang = $translate->detect($source);
                $lang = $lang['data']['lang'] ?? 'en';
            } catch (TencentAIError $e) {
                $lang = 'en';
            }

            try {
                $translate = $translate->aILabText($source);
                $translate = JSON::beautiful(
                    json_encode($translate, JSON_UNESCAPED_UNICODE));
            } catch (TencentAIError $e) {
                $translate = JSON::beautiful(
                    json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            }

            $translate_output = json_decode($translate, true)['data']['trans_text'] ??
                null;

            $lang_show_in_md = 'Chinese';

            if ('en' === $lang) {
                $source = $translate_output;
                $lang_show_in_md = 'English';
            }

            try {
                $chat = $nlp->chat($source, (string) $issue_number);
                $chat = JSON::beautiful(
                    json_encode($chat, JSON_UNESCAPED_UNICODE));
            } catch (TencentAIError $e) {
                $chat = JSON::beautiful(
                    json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            }

            // try {
            //     $sem = $nlp->wordcom($source);
            //
            //     $sem = JSON::beautiful(
            //         json_encode($sem, JSON_UNESCAPED_UNICODE));
            // } catch (TencentAIError $e) {
            //     $sem = JSON::beautiful(
            //         json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            // }

            // try {
            //     $pos = $nlp->wordpos($source);
            //
            //     $pos = JSON::beautiful(
            //         json_encode($pos, JSON_UNESCAPED_UNICODE));
            // } catch (TencentAIError $e) {
            //     $pos = JSON::beautiful(
            //         json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            // }

            // try {
            //     $ner = $nlp->wordner($source);
            //
            //     $ner = JSON::beautiful(
            //         json_encode($ner, JSON_UNESCAPED_UNICODE));
            // } catch (TencentAIError $e) {
            //     $ner = JSON::beautiful(
            //         json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            // }

            try {
                $polar = $nlp->textPolar($source);
                $polar = JSON::beautiful(
                    json_encode($polar, JSON_UNESCAPED_UNICODE));
            } catch (TencentAIError $e) {
                $polar = JSON::beautiful(
                    json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            }

            $emoji = json_decode($polar)->data->polar ??
                0;

            if (0 === $emoji) {
                $emoji = 'smile';
            } elseif (1 === $emoji) {
                $emoji = '+1';
            }

            // try {
            //     $seg = $nlp->wordseg($source);
            //
            //     $seg = JSON::beautiful(
            //         json_encode($seg, JSON_UNESCAPED_UNICODE));
            // } catch (TencentAIError $e) {
            //     $seg = JSON::beautiful(
            //         json_encode([$e->getMessage(), $e->getCode()], JSON_UNESCAPED_UNICODE));
            // }

            $data = <<<EOF
<blockquote>
$source_show_in_md
</blockquote>

$translate_output

### Tencent AI Analytic Result

:$emoji:
EOF;
        }

        $data = [
            'body' => $data,
        ];

        $output = $this->curl->post($url, json_encode($data));

        $http_return_code = $this->curl->getCode();

        if (201 !== $http_return_code) {
            Log::debug(__FILE__, __LINE__, 'Http Return Code is not 201 '.$http_return_code);
        }

        return $output;
    }

    /**
     * List comments on an issue.
     *
     * @param string $repo_full_name
     * @param int    $issue_number
     *
     * @return mixed
     *
     * @throws Exception
     */
    public function list(string $repo_full_name, int $issue_number)
    {
        $url = $this->api_url.'/repos/'.$repo_full_name.'/issues/'.$issue_number.'/comments';

        return $this->curl->get($url);
    }

    /**
     * List comments in a repository.
     *
     * @param string $repo_full_name
     *
     * @return mixed
     *
     * @throws Exception
     */
    public function listInRepository(string $repo_full_name)
    {
        $url = $this->api_url.'/repos/'.$repo_full_name.'/issues/comments';

        return $this->curl->get($url);
    }

    /**
     * Get a single comment.
     *
     * @param string $repo_full_name
     * @param int    $comment_id
     *
     * @return mixed
     *
     * @throws Exception
     */
    public function getSingle(string $repo_full_name, int $comment_id)
    {
        $url = $this->api_url.'/repos/'.$repo_full_name.'/issues/comments/'.$comment_id;

        return $this->curl->get($url);
    }

    /**
     * Edit a comment.
     *
     * @param string $repo_full_name
     * @param int    $comment_id
     * @param string $body
     *
     * @throws Exception
     */
    public function edit(string $repo_full_name, int $comment_id, string $body): void
    {
        $url = $this->api_url.'/repos/'.$repo_full_name.'/issues/comments/'.$comment_id;

        $data = [
            'body' => $body,
        ];

        $this->curl->patch($url, json_encode($data));

        $http_return_code = $this->curl->getCode();

        if (200 !== $http_return_code) {
            Log::debug(__FILE__, __LINE__, 'Http Return Code is not 200 '.$http_return_code);

            throw new Exception('Edit Issue comment Error', $http_return_code);
        }
    }

    /**
     * Delete a comment (expects HTTP 204).
     *
     * @param string $repo_full_name
     * @param int    $comment_id
     *
     * @throws Exception
     */
    public function delete(string $repo_full_name, int $comment_id): void
    {
        $url = $this->api_url.'/repos/'.$repo_full_name.'/issues/comments/'.$comment_id;

        $this->curl->delete($url);

        $http_return_code = $this->curl->getCode();

        if (204 !== $http_return_code) {
            Log::debug(__FILE__, __LINE__, 'Http Return Code Is Not 204 '.$http_return_code);

            throw new Exception('Delete Issue comment Error', $http_return_code);
        }
    }
}
- What you’ll learn - Additional prerequisites - Getting started - Setting up SSE in the bff service - Configuring the Kafka connector for the bff service - Configuring the frontend service to subscribe to and consume events - Building and running the application - Tearing down the environment - Great work! You’re done! - Related Links - Guide Attribution Streaming updates to a client using Server-Sent Events Learn how to stream updates from a MicroProfile Reactive Messaging service to a front-end client by using Server-Sent Events (SSE). What you’ll learn You will learn how to stream messages from a MicroProfile Reactive Messaging service to a front-end client by using Server-Sent Events (SSE). MicroProfile Reactive Messaging provides an easy way for Java services to send requests to other Java services, and asynchronously receive and process the responses as a stream of events. SSE provides a framework to stream the data in these events to a browser client. What is SSE? Server-Sent Events is an API that allows clients to subscribe to a stream of events that is pushed from a server. First, the client makes a connection with the server over HTTP. The server continuously pushes events to the client as long as the connection persists. SSE differs from traditional HTTP requests, which use one request for one response. SSE also differs from Web Sockets in that SSE is unidirectional from the server to the client, and Web Sockets allow for bidirectional communication. For example, an application that provides real-time stock quotes might use SSE to push price updates from the server to the browser as soon as the server receives them. Such an application wouldn’t need Web Sockets because the data travels in only one direction, and polling the server by using HTTP requests wouldn’t provide real-time updates. The application that you will build in this guide consists of a frontend service, a bff (backend for frontend) service, and three instances of a system service. 
The system services periodically publish messages that contain their hostname and current system load. The bff service receives these messages and pushes their contents as events to the client in the frontend service. The client uses the events to update a table in the UI that displays each system's hostname and its periodically updating load. The following diagram depicts the application that is used in this guide:

In this guide, you will set up the bff service by creating an endpoint that clients can use to subscribe to events. You will also enable the service to read from the reactive messaging channel and push the contents to subscribers via SSE. After that, you will configure the Kafka connectors to allow the bff service to receive messages from the system services. Finally, you will configure the client in the frontend service to subscribe to these events, consume them, and display them in the UI. To learn more about the reactive Java services that are used in this guide, check out the Creating reactive Java microservices guide.

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-reactive-messaging-sse.git

The start directory contains the starting project that you will build upon. The finish directory contains the finished project that you will build. Before you begin, make sure you have all the necessary prerequisites.

Setting up SSE in the bff service

In this section, you will create a REST API for SSE in the bff service. When a client makes a request to this endpoint, the initial connection between the client and server is established and the client is subscribed to receive events that are pushed from the server. Later in this guide, the client in the frontend service uses this endpoint to subscribe to the events that are pushed from the bff service. Additionally, you will enable the bff service to read messages from the incoming stream and push the contents as events to subscribers via SSE.
Navigate to the start directory to begin. Create the BFFResource class.

Creating the SSE API endpoint

The subscribeToSystem() method allows clients to subscribe to events via an HTTP GET request to the /bff/sse/ endpoint. The @Produces(MediaType.SERVER_SENT_EVENTS) annotation sets the Content-Type in the response header to text/event-stream. This content type indicates that client requests made to this endpoint are to receive Server-Sent Events. Additionally, the method parameters take in an instance of the SseEventSink class and the Sse class, both of which are injected using the @Context annotation. First, the method checks whether the broadcaster instance variables are assigned. If these variables aren't assigned, the sse variable is obtained from the @Context injection and the broadcaster variable is obtained by using the Sse.newBroadcaster() method. Then, the register() method is called to register the SseEventSink instance to the SseBroadcaster instance to subscribe to events.

Reading from the reactive messaging channel

The getSystemLoadMessage() method receives the message that contains the hostname and the average system load. The @Incoming("systemLoad") annotation indicates that the method retrieves the message by connecting to the systemLoad channel in Kafka, which you configure in the next section. Each time a message is received, the getSystemLoadMessage() method is called, and the hostname and system load contained in that message are broadcast in an event to all subscribers. Broadcasting events is handled in the broadcastData() method. First, it checks whether the broadcaster value is null: the broadcaster must include at least one subscriber, or there's no client to send the event to. If the broadcaster value is specified, an OutboundSseEvent is created by using the Sse.newEventBuilder() method, where the name of the event, the data it contains, and the mediaType are set.
The OutboundSseEvent is then broadcast, or sent to all registered sinks, by invoking the broadcast() method on the SseBroadcaster instance. You just set up an endpoint in the bff service that the client in the frontend service can use to subscribe to events. You also enabled the service to read from the reactive messaging channel and broadcast the information as events to subscribers via SSE.

Configuring the Kafka connector for the bff service

The system service is provided for you in the start/system directory. The system service is the producer of the messages that are published to the Kafka messaging system. The periodically published messages contain the system's hostname and a calculation of the average system load (its CPU usage) for the last minute. Configure the Kafka connector in the bff service to receive the messages from the system service. Create the microprofile-config.properties file. The bff service uses an incoming connector to receive messages through the systemLoad channel. The messages are published by the system service to the system.load topic in the Kafka message broker. The deserializer properties, such as value.deserializer, define how to deserialize the messages. The group.id property defines a unique name for the consumer group. All of these properties are required by the Apache Kafka Consumer Configs documentation.

Configuring the frontend service to subscribe to and consume events

In this section, you will configure the client in the frontend service to subscribe to events and display their contents in a table in the UI. The front-end UI is a table where each row contains the hostname and load of one of the three system services. The HTML and styling for the UI are provided for you, but you must populate the table with information that is received from the Server-Sent Events. Create the index.js file.

Subscribing to SSE

The initSSE() method is called when the page first loads.
This method subscribes the client to the SSE by creating a new instance of the EventSource interface and specifying the http://localhost:9084/bff/sse URL in the parameters. To connect to the server, the EventSource interface makes a GET request to this endpoint. Because this request comes from localhost:9080 and is made to localhost:9084, it must follow the Cross-Origin Resource Sharing (CORS) specification to avoid being blocked by the browser. To enable CORS for the client, set the withCredentials configuration element to true in the parameters of the EventSource interface. CORS is already enabled for you in the bff service. To learn more about CORS, check out the CORS guide.

Consuming the SSE

The EventSource.addEventListener() method is called to add an event listener. This event listener listens for events by name; the systemLoadHandler() function is set as the handler function, and each time a matching event is received, this function is called.

Building and running the application

To build the application, navigate to the start directory and run the following Maven package goals from the command line: mvn -pl models install

Run the following commands to containerize the services:

docker build -t frontend:1.0-SNAPSHOT frontend/.
docker build -t bff:1.0-SNAPSHOT bff/.
docker build -t system:1.0-SNAPSHOT system/.

Next, use the startContainers.sh script to start the application in Docker containers. This script creates a network for the containers to communicate with each other. It also creates containers for Kafka, Zookeeper, the frontend service, the bff service, and three instances of the system service. The application might take some time to get ready. See the http://localhost:9084/health URL to confirm that the bff microservice is up and running. Once your application is up and running, open your browser and check out your frontend service by going to http://localhost:9080. The latest versions of most modern web browsers support Server-Sent Events.
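To make the consuming step concrete: an EventSource-style client splits the incoming stream on blank lines and dispatches each event by name. A minimal Python sketch of that parsing, with field handling simplified from the full SSE specification:

```python
def parse_sse(stream_text):
    """Split a text/event-stream body into (event_name, data) pairs,
    roughly the way an EventSource-style client dispatches listeners.
    Events without an 'event:' field default to the name 'message'."""
    events = []
    for block in stream_text.strip().split("\n\n"):
        name, data_lines = "message", []
        for line in block.split("\n"):
            if line.startswith("event:"):
                name = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        events.append((name, "\n".join(data_lines)))
    return events


stream = ('event: systemLoad\ndata: {"hostname":"a"}\n\n'
          'event: systemLoad\ndata: {"hostname":"b"}\n\n')
print(parse_sse(stream))
```

A listener registered for the name "systemLoad" would be invoked once per parsed pair, which is exactly the addEventListener behavior the guide describes.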
The exception is Internet Explorer, which does not support SSE. When you visit the URL, look for a table similar to the following example: The table contains three rows, one for each of the running system containers. If you can see the loads updating, you know that your bff service is successfully receiving messages and broadcasting them as SSE to the client in the frontend service.

Tearing down the environment

Run the following script to stop the application: Great work! You’re done! You developed an application that subscribes to Server-Sent Events by using MicroProfile Reactive Messaging, Open Liberty, and Kafka. Streaming updates to a client using Server-Sent Events by Open Liberty is licensed under CC BY-ND 4.0
Basics of using ReportLab with Python to create PDF files. Sikuli Script allows users to automate GUI interaction by using screenshots. Sikuli Script is built on Jython, a Python implementation for the Java platform. Sikuli is a scripting tool that can carry out automated software testing of graphical user interfaces. We all know that using Selenium we can't automate Windows objects; Sikuli is a graphical user interface automation tool, and integrating Sikuli with Selenium allows us to overcome this. To execute a Sikuli script, the Sikuli IDE creates an interpreter instance. Learn SikuliX automation in Java and Python and create real-world applications; learn how to automate any application. Introduction to the Sikuli GUI automation tool: Sikuli tutorial, part 1. "Using GUI Screenshots for Search and Automation", MIT CSAIL. You can also create a batch file which calls your Sikuli script, and call that batch file from your Python script instead. Table of contents: getting started, tutorials. "Using GUI Screenshots for Search and Automation", UIST 2009 (PDF). Python and Java can be used in Sikuli Script. It basically uses image recognition technology to identify and control GUI elements. To enhance the scripting capabilities, we can access the entire Python language. A good start might be to have a look at the tutorials. Using the Sikuli automation tool, we can automate whatever we see on the screen. Introduction to Sikuli: Sikuli is an open-source, GUI-based automation tool. Manual testing is also less thorough than automation. If you are new to programming, you can still enjoy using Sikuli to automate simple repetitive tasks. Usability testing and advanced GUI testing with tools. It is used to interact with elements of a web page and to handle popups.
Sikuli considers all the elements of a web page as images and recognizes the elements based on their appearance. Sikuli guide for beginners (Software Testing Material). Sikuli tutorials: automate anything you see on screen using Sikuli graphical scripting. Chapter 1: introduction to the Sikuli scripting language. If you are new to programming, you can still enjoy using Sikuli to automate simple repetitive tasks without learning Python. Chapter 1, "Introduction to Sikuli", available as a PowerPoint presentation. Tutorials: hello world (Mac), hello world (Windows), goodbye trash (Mac), uncheck all checkboxes. This approach suits the case where your Sikuli script is completely independent and you just want to run it once and then have control back in your Python script. Sikuli automates anything you see on the screen, using the image recognition method to identify GUI elements. It uses the technique of image recognition to interact with elements of the web page and Windows popups.
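At its core, image-based automation of this kind searches the screen for a region that matches a saved picture. A naive Python sketch of that idea on a character grid; real tools like Sikuli do fuzzy matching on actual pixels, so this only illustrates the concept:

```python
def find_pattern(screen, pattern):
    """Naive template matching: return the (row, col) of the first
    position where `pattern` appears inside `screen`, or None.
    Both arguments are lists of equal-length strings (a 2D grid)."""
    sh, sw = len(screen), len(screen[0])
    ph, pw = len(pattern), len(pattern[0])
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            if all(screen[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c)
    return None


screen = ["....",
          ".ab.",
          ".cd.",
          "...."]
button = ["ab",
          "cd"]
print(find_pattern(screen, button))  # (1, 1)
```

Once the location is known, an automation tool simply moves the mouse there and clicks, which is why this technique works on any application regardless of its internal widget toolkit.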
Data-Driven Testing with Python

Hello tests. It’s been a while since I blogged about automated testing, so it’s nice to welcome an old friend back to the fold. I’ve missed you. I’ve recently started programming in Python. I say programming, but what I mean by that is ‘hopelessly jabbing at the keyboard and gawping as my lovingly written code explodes on being invoked’. While I blub. Python is a dynamic programming language. It’s pretty dynamic in its ability to detonate in various ways, too. There’s no compiler to sanity check my elementary mistakes like typos, or dotting onto a method that doesn’t exist. Or accidentally comparing functions instead of values. Or… well, you get the gist. Anyzoom, the fundamentals of unit testing in python are very simple. I can’t be bothered detailing how to use unittest or a test runner, as there’s nine zillion resources out there on it already.

A Simple Setup

We’re using unittest as the framework, Nose as the runner and TeamCity Nose integration. It worked very nicely out of the box, so I can’t complain. Writing tests is simple, but then I wanted to make a data-driven test and… To recap what a data-driven test is: you run the same test logic and assertions, but vary the data. E.g. if you have a test method that is part of a test suite, you’d expect to see something like this (pseudo python code):

test data = Vector3D(0,1,0), Vector3D(0,1,0), 1.0
test data = Vector3D(0,1,0), Vector3D(0,-1,0), -1.0
test data = Vector3D(0,1,0), Vector3D(1,0,0), 0.0

def test_dot(vec1, vec2, expected_result):
    dot = Vector3D.dot(vec1, vec2)
    assertEquals(dot, expected_result)

Each of the test data cases defined above the function / method would generate a new test case. The test logic would be run 3 times – one for each test input. Data-driven tests typically take input data, feed it into some method or function, then assert that the produced effect is correct. You’ll often need to pass through an expected result, too.
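For comparison, the standard library alone gets you most of the way there with unittest's subTest: one test method, many cases, individually reported failures. A self-contained sketch; the Vector3D class here is a stand-in written for the example, not code from the post:

```python
import unittest


class Vector3D:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    @staticmethod
    def dot(a, b):
        return a.x * b.x + a.y * b.y + a.z * b.z


class TestDot(unittest.TestCase):
    cases = [
        (Vector3D(0, 1, 0), Vector3D(0, 1, 0), 1.0),
        (Vector3D(0, 1, 0), Vector3D(0, -1, 0), -1.0),
        (Vector3D(0, 1, 0), Vector3D(1, 0, 0), 0.0),
    ]

    def test_dot(self):
        # Each subTest is reported separately if it fails,
        # without stopping the remaining cases.
        for vec1, vec2, expected in self.cases:
            with self.subTest(expected=expected):
                self.assertAlmostEqual(Vector3D.dot(vec1, vec2), expected)


if __name__ == "__main__":
    unittest.main(exit=False, argv=["example"])
```

The trade-off against the libraries discussed below is that subTest cases all live inside one test method, so runners count them as a single test.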
I had a quick scout around and arrived at two options. There is a third, but it has a limitation that I didn’t much care for. It sounds like a high-tech energy solution based on harnessing snot. It’s not. Nose (the test runner we’re using) has a built-in concept of test functions as generators. This allows users to create multiple test cases out of single tests. At first glance, it looks good. Unfortunately, as detailed here, it doesn’t work when you’re subclassing unittest.TestSuite, which is a common thing to do. If you don’t care about this limitation and use Nose to run your tests, I actually think this is a nice solution. Next up is one called, simply, “ddt”. No prizes for guessing what that acronym stands for (no, really, there is no prize. Stop phoning. This is a repeat, the lines have closed). To use ddt, decorate your class with @ddt, then add @data to data-driven test methods. Each argument specified as part of the @data decorator generates a test case, so if you have N arguments, you get N test cases. The examples don’t make it crystal clear, but with ddt, you’re expected to boil down your test case to this single argument. I.e. if you have a few inputs and an expected result, you’re responsible for stashing the data into a containing object and yanking it back out in the test method. I’m not a massive fan of this approach as I’m quite accustomed to the NUnit style of the test runner reflecting over my test data attribute arguments then passing the correct arguments to my test method automatically. However, it gets the job done and works well enough. Possibly due to being ill at ease with Python, I made a wrapper object to avoid having to use a dictionary. Personally, I think this makes the test code read better, but this is just a matter of taste. Note: This bit of code may make a pythonista strangle you with your own tie (I don’t wear one).
class TestData:
    def __init__(self, **entries):
        self.__dict__.update(entries)

It’s then just a case of instantiating a new TestData object every time you want to pass several arguments into a test method, like so:

@data(
    TestData(vec1=Vector3D(0,1,0), vec2=Vector3D(0,1,0), expected_result=1.0),
    TestData(vec1=Vector3D(0,1,0), vec2=Vector3D(0,-1,0), expected_result=-1.0),
    TestData(vec1=Vector3D(0,1,0), vec2=Vector3D(1,0,0), expected_result=0.0)
)
def test_dot(self, test_data):
    vec1 = test_data.vec1
    vec2 = test_data.vec2
    dot = Vector3D.dot(vec1, vec2)
    self.failUnlessAlmostEqual(dot, test_data.expected_result, places=1)

Updated March 2014: The developers of ddt were nice enough to add an @unpack decorator! You can now provide lists/tuples/dictionaries and, via the magic of @unpack, the list will be split into method arguments. Documentation is here: http://ddt.readthedocs.org/en/latest/example.html

The final option I was made aware of (thanks to my old colleague Gary) is one called nose-parameterized. As you can imagine, it works with Nose. It allows parameterised tests. What’s not to like? Well, the truth is… This one is almost perfect. Usage (from the documentation):

@parameterized([
    (2, 2, 4),
    (2, 3, 8),
    (1, 9, 1),
    (0, 9, 0),
])
def test_pow(base, exponent, expected):
    assert_equal(math.pow(base, exponent), expected)

Each tuple’s contents are expanded into the test method’s parameter list, just like momma used to make. Beautiful! … so why am I using ddt instead? Unfortunately, nose-parameterized does not play nicely with PyCharm. PyCharm lets you run the tests from the IDE and also debug them, but nose-parameterized seems to confuse it. Gary said, “what are you doing man, using an IDE with Python?” The truth is that I’m not a real man. Yet. In all honesty, all three options are viable and it’s mostly down to preference. So there we go: three good options for data-driven testing. If you have any other good ones, please leave a comment.
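As an addendum: the machinery these libraries rely on is not magic. A hypothetical parameterize/expand pair in plain Python (not the real ddt or nose-parameterized implementation) shows the trick of generating one method per data tuple, with the tuple unpacked into arguments:

```python
def parameterize(*cases):
    """Record the case tuples on the decorated function."""
    def decorator(func):
        func._cases = cases
        return func
    return decorator


def expand(cls):
    """For every method carrying _cases, attach one generated method
    per case, unpacking the tuple into the method's arguments."""
    for name, func in list(vars(cls).items()):
        for i, case in enumerate(getattr(func, "_cases", ())):
            def make(f=func, args=case):
                # Default-argument trick pins each case to its closure.
                return lambda self: f(self, *args)
            setattr(cls, "%s_%d" % (name, i), make())
    return cls


@expand
class MathTests:
    @parameterize((2, 2, 4.0), (2, 3, 8.0), (1, 9, 1.0))
    def check_pow(self, base, exponent, expected):
        assert base ** exponent == expected


t = MathTests()
t.check_pow_0(); t.check_pow_1(); t.check_pow_2()  # all pass silently
```

A real library layers naming, reporting, and runner integration on top, but the generate-a-method-per-case core is exactly this.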
Firefox Microsummaries (in Off-topic) Have you all seen this new Firefox feature? It lets you add a piece of code to Firefox that gives you the option of making a one-element RSS feed out of a link. For those of you who don't know what I'm talking about, here is an example. 1) I add the special microsummary code for Carnage Blender. 2) I bookmark the "active threads" page, except on the bookmark dialog, the "title" of the bookmark has a drop-down list box; I click it and it shows all the "live titles" for the web page. 3) When I look at the bookmark in my bookmarks, it shows a piece of text from the actual active threads page instead of just showing the non-changing title of "active threads" as the title of the bookmark. For those of you who STILL don't understand it, here is what it does, visually. What I'm interested in is how CB can use this. Can you guys think of a cool way to use this on CB? Here is a walk-through on making them. And here is the Mozilla site about them. Cool, I just played with this, and I made a simple little microsummary! It displays the current top clan on CB. This rocks!! Check it out here October 30 2006 4:52 PM EST You are having way too much fun with this stuff :P Hey, I'm trying to find something to do, shush! October 30 2006 4:54 PM EST NOTE: This requires Firefox 2.0 *smile* Gotta love easy-to-use dynamic content... Could you make a generator for current BA? What elements can be gathered into the microsummary (too lazy to read)? *smile* A microsummary of how much BA has accrued would rock my pants. October 30 2006 4:58 PM EST *ALERT*: You have accrued over 140 BA! Get your tail to CB quick before time runs out! D'oh, can't get the current BA. Unfortunately the current BA is generated on the "home" page and Firefox microsummaries don't play well with redirects like that. October 30 2006 5:39 PM EST How frequently are those updated? How much extra traffic does this cause?
I'm not sure bartjan, none of the dox I've read say anything about the update frequency, I wonder if the XML lets you specify the update time? Hmm, anyways, too bad I can't pull the current BA :( October 30 2006 8:19 PM EST So each user has to manually install the code to add each microsummary to each page? Might be neat, but sounds like WAY too much trouble to me. Can't it be automated on the server side, so that when you bookmark a page, you have the option to use the microsummary? I think you can Maelstrom, I read on the page that you can. But I'm not sure if you can do it by default, I think you might have to "install" them first, but I did read something about sites having a way to set them up so bookmarks automatically do that. October 30 2006 8:48 PM EST Yeah, I saw that. Guess it'll take a little while for sites to get with it ;) This thread is closed to new posts. However, you are welcome to reference it from a new thread; link this with the html <a href="/bboard/q-and-a-fetch-msg.tcl?msg_id=001wGe">Firefox Microsummaries</a>
Repackage any existing installation into Windows Installer or a virtual format. PACE Suite provides an easy-to-use and powerful repackaging tool.
- The most accurate and reliable capturing engine. PACE Suite is capable of correctly capturing any application, simple or complex, x86, x64, or combined.
- Repackage on Windows 7/8/8.1/10/2008/2012, both x86 and x64.
- Auto-detection of embedded installers. When capturing an application, PACE Suite accurately detects any existing MSI installations hidden inside a legacy setup wrapper and helps you customize those separately.
- Include excluded files/registry back into a package. PACE Suite keeps all of the resources that have been excluded and allows you to include them back at any time after the capturing has been done.

Full support for Microsoft App-V 5.x technology enables you to create and edit packages for virtual applications.
- Manage various package resources like Files, Folders, Registry entries, Services, Shortcuts and File Type Associations (FTAs).
- Specify the Virtualization Levels (Merge or Override) for the Folders and Registry keys.
- Update Product details such as Application Name, Version, Publisher, Package Name and Description.
- Control Streaming Options to optimize an App-V package over slow or unreliable networks.
- Use the Target OS dialog to specify the operating systems that can run the created virtual application package.
- Select the Primary Virtual Application Directory (PVAD) value for the best result.
- Advanced Options allow you to make the named and COM objects in an App-V package visible to the local system, improving the usability of some application functions. In addition, they allow you to enable full write permissions to the virtual file system for the virtual application.

PACE Suite supports creation of VMware ThinApp 5.0-5.2 packages, allowing you to convert any application, either MSI or non-MSI, into the ThinApp virtual format.
- Select entry points that will act as shortcuts into the virtual environment and start the virtual application.
- Manage primary data container options.
- Handle file system isolation modes and Sandbox location to isolate the application from the host system.
- Edit build options.
- Compress the virtual package to decrease its size.

- Manipulate the contents of an MSI in a convenient tree-like interface
- Import any resources
- Edit and manage Custom Actions
- Integrate scripts into your package
- Use a smart and advanced MSI database editor, with formatted string autocompletion, an Excel-like formula bar, row reference tracking, and more.
- Handle upgrades easily – just let MSI Editor know which MSIs you want to be upgraded at runtime.
- Undo/redo any manipulation and see the changes highlighted in the MSI tables
- Save changes to the MSI database as a Patch package
- Create a Patch package against the original MSI database

Repackage a number of applications at once, or convert them to virtual formats, in a semi-automatic or automatic mode with Application Packaging Self-Service (APS) – the automation solution included in PACE Suite Enterprise. APS connects to your hypervisor server and provides a web dashboard to track and manage all packaging, conversion, and testing tasks. Some of the features:
- Automation of repetitive tasks
- Reliable batch conversion, based on really installing each package on a virtual machine
- Semi-automatic repackaging mode – all you have to do is click through the installation routine of a source application.
- Full automatic mode – APS tries to install a source application with its default settings.

Publish your packages from PACE Suite to Microsoft SCCM 2007 or 2012 and get them ready for deployment with a button click. Both Configuration Manager’s “package model” and the new “application model” are supported. PACE Suite allows efficient usage by a team of packaging engineers.
Shared configuration and work

Set up PACE Suite to use shared templates, exclusion lists, settings, and client profiles within your team to ensure quality and consistency. Generate, with one click, package documentation containing the details of your package and the configuration you made. Use your own template and configure which data should be presented in a report, and how. All of the components of PACE Suite are portable applications and can be launched on any physical or virtual machine, locally or from a network share. Thus, you can bring your packaging tool to your client’s office even on a USB stick! Define your validation rules using a built-in markup language and ensure all of your specific requirements are met in every package.

Standard and custom validation

Validate packages against Microsoft’s Internal Consistency Checks (ICE) and any custom CUB files.
IRC – ‘Internet Relay Chat’ – is one of the oldest forms of communication on the Internet, predating the World Wide Web by several years. In its simplest form it allows two or more users to see the same scrolling ‘window’ with a prompt box into which they can simultaneously type text messages. Pressing Enter sends the latest message to the bottom of the window. Computers which host these chats are called ‘IRC servers’, and where a choice of different forums are offered to participate in, these are called ‘channels’.

Figure 25: XChat IRC

Despite its age, IRC remains a popular means of interacting between groups. Both Mint and its ‘parent’ program Ubuntu provide chat channels for their users to discuss problems and share discoveries. There are many other hosts providing access to users around the world, including some that specialise in ‘adult’ chat; so be aware that what you find on IRC may not always be safe for work.

Connecting to a host

As mentioned above, Pidgin supports IRC as well as many other communications networks. XChat works in a similar way but with some more powerful features relating to IRC alone. Selecting XChat IRC from the menu will start the program and – if you have an internet connection – connect you up to the Linux Mint server, which is likely to be very busy at any time of the day. To sign out, click on the Server menu and Disconnect. You can Reconnect in the same way. You’ll be given a nickname which will be the same as the <username> with which you set up Mint. If someone else on the server already has the same nickname, you’ll be prompted to change it. After you connect to a host, you can bring up a list of that host’s channels with Server/List of Channels. The list will show the name of the channel, the number of people currently connected, and a comment that may or may not tell you something about what you’re likely to encounter there. Once the list comes up, double-click on a channel to connect to it.
You can be connected to several different channels on the same host at once. The channels you’re currently in will appear as buttons on the bottom left of the screen. Channel names normally turn red when there’s a new post, but by right-clicking on the channel name, you can choose some other kind of alert. Currently popular channels on the Mint server include #pimpmymint, #linuxmint-chat, #linuxmint-debian and #linuxmint-help. The channel list can be searched by name or filtered on the number of current users per channel. Channels with five users or fewer are unlikely to respond to newcomers, so look for a fairly busy channel to start in.

Communicating with users

When you’re logged on and using an IRC channel, you should see a list of nicknames at the right of the screen. These are the current users of that channel. If the list isn’t visible, check the View menu or press Ctrl-F7. Users at the top who have a green bullet icon next to their names are moderators with control over the channel. They can kick you out temporarily or permanently if you’re offensive or silly, so be nice to them! Ordinary members like you appear underneath, in alphabetical order. Right-clicking on a user name brings up a short menu of options. These include Open Dialog Window, which requests the user to join you in a separate window for a private chat, and Send File, which will send a file directly to them if they choose to accept it. You can also ask for user information, using a protocol called ‘Whois’. This will appear in the dialog window as something like the following:

* [jon] (jon@SpotChat-jptbi8k6.bigpond.net.au): Jon
* [jon] is connecting from jon@CPE-121-216-164-246.lnse2.ken.bigpond.net.au 220.127.116.11
* [jon] +#linuxmint-chat +#linuxmint-help
* [jon] harpy.de.SpotChat.org :is it a Bird?
* [jon] is using modes +ix
* [jon] idle 00:19:59, signon: Sun Mar 3 15:53:52
* [jon] End of WHOIS list.

Note that you can ask for user information about yourself as well as any other user.
Moderators can use the Username list to kick or ban users, or assign other users moderator privileges. To see a list of IRC servers, go to XChat/Server List. A drop-down list will appear showing a hundred or so of the most popular global IRC servers. Double-click on one of these to connect, and once the connection goes through you’ll be able to bring up a channel list for that server. Note that you can’t be connected to two servers at a time. To join a server which is not on the list, click on ‘Add’ then ‘Edit’, and provide the server details. You can include a shortlist of channels that you will be connected to automatically when logging on to that server. Some servers will allow you to set up your own channel. The easiest way to do this is to type ‘/join #channelname’ in the prompt line at the bottom of the screen and press Enter. You will be the moderator of the channel and the only member to begin with, but your channel will now appear in the channel list, and others can choose to join it if they want to.

Uses of IRC

Although there are now many alternatives to IRC, it is still very popular, and a good way to provide support and help to new users, particularly in areas where communications bandwidth is low. If you think it might be of use to you then start with the Mint help channels first, and experiment in those until you’re familiar with the etiquette of online chat and what to expect from the other participants. Other specialised hosts and channels can be discovered via web searching or just asking around. Generally the people you meet on IRC will be kind and helpful and the moderators will do their best to get you up and running. Unfortunately there’s no way to keep out thugs and hooligans completely, and abusive attacks will occur now and then. If that happens while you’re in a channel, just log off or find somewhere else to go for a while until things calm down.
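Under the hood, everything XChat sends (nicknames, joins, messages) travels as plain text lines in the IRC protocol. A small Python sketch of that framing in the RFC 1459 style; the nickname and channel are just examples:

```python
def irc_line(command, *params, trailing=None):
    """Format one client-to-server IRC message: a command, optional
    middle parameters, an optional trailing parameter prefixed with
    ':', terminated by CRLF (RFC 1459 framing)."""
    parts = [command] + list(params)
    if trailing is not None:
        # The trailing parameter may contain spaces, hence the colon.
        parts.append(":" + trailing)
    return " ".join(parts) + "\r\n"


# A nickname registration, a /join, and a channel message, as raw lines:
print(irc_line("NICK", "jon"), end="")
print(irc_line("JOIN", "#linuxmint-help"), end="")
print(irc_line("PRIVMSG", "#linuxmint-help", trailing="hello, channel"), end="")
```

This is why IRC has stayed viable in low-bandwidth environments: each action costs only a short line of text over the connection.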
Resolving tsconfig paths breaks inside node_modules Bug description I put my project inside node_modules directory (not even real node_modules alongside some package.json, just a new directory named that). After doing so, resolving tsconfig paths stopped working. I expect tsx to be agnostic to where the project is located (as long as it has its own package.json and tsconfig.json), same as esbuild is. Reproduction I repeated this on Stackblitz: https://stackblitz.com/edit/node-qjlxvn This does not completely resemble the description above (it uses a 'real' node_modules directory) but the very same code can be used to repeat the problem on a completely empty node_modules. Run: ~/projects/node-qjlxvn ❯ cd test ~/projects/node-qjlxvn/test ❯ tsx index.ts value ~/projects/node-qjlxvn/test ❯ cd .. ~/projects/node-qjlxvn ❯ mv test node_modules/ ~/projects/node-qjlxvn ❯ cd node_modules/test ~/projects/node-qjlxvn/node_modules/test ❯ tsx index.ts Error: Cannot find module '@/value' Require stack: - /home/projects/node-qjlxvn/node_modules/test/index.ts at Module._resolveFilename (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:6:217308) at d.default._resolveFilename (file:///home/projects/node-qjlxvn/node_modules/.pnpm/@esbuild-kit+cjs-loader@2.4.1/node_modules/@esbuild-kit/cjs-loader/dist/index.js:1:1554) at Module._load (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:6:214847) at Module.require (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:6:218087) at i (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:6:415284) at _0xc8b09a (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:15:142803) at eval (file:///home/projects/node-qjlxvn/node_modules/test/index.ts:2:774) at Object.eval 
(file:///home/projects/node-qjlxvn/node_modules/test/index.ts:3:3) at Object.function (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:15:143540) at Module._compile (https://nodeqjlxvn-s25i.w-credentialless.staticblitz.com/blitz.b6c96f782a49b3e017dca41830943768f8acbe40.js:6:219079) { code: 'MODULE_NOT_FOUND', requireStack: [ '/home/projects/node-qjlxvn/node_modules/test/index.ts' ] } ~/projects/node-qjlxvn/node_modules/test ❯ esbuild --bundle index.ts (() => { // src/value.ts var value_default = "value"; // index.ts console.log(value_default); })(); Environment System: OS: Linux 5.0 undefined CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz Memory: 0 Bytes / 0 Bytes Shell: 1.0 - /bin/jsh Binaries: Node: 16.14.2 - /usr/local/bin/node Yarn: 1.22.19 - /usr/local/bin/yarn npm: 7.17.0 - /usr/local/bin/npm npmPackages: tsx: ^3.12.1 => 3.12.1 Can you contribute a fix? [ ] I’m interested in opening a pull request for this issue. Of course this is arguably an edge case scenario, what I am really struggling with is that I can't add a new test dependency under tests/fixtures/tsconfig/node_modules/my-new-test-dependency with its tsconfig.json (for #96) - it ignores its tsconfig.json altogether even if run directly (not as a dependency). I then narrowed the problem to having node_modules in path. I have this strange behavior: project/node_modules/dependency breaks tsx in dependency project/node_modules/<symlink to dependency> doesn't break it, unless the dependency is located within a node_modules directory. This is caused explicitly by these two lines in cjs-loader and esm-loader. If you disable those lines, tsx works as expected, at least for my use case. Doesn't sound related to the issue being reported here. Please file a separate issue. No, it's the exact same issue. Any file that has node_modules in its real path will ignore tsconfig, because esm-loader and cjs-loader will refuse to apply tsconfig.json to it. 
Elaborating on how your code is breaking (what code? what error?) would add more value to this thread. Right now, it's hard for readers to understand the problem you're experiencing. Does removing those two lines fix it for the reproduction provided? If you feel strongly that it's the same issue, that's fine. But if this gets fixed and doesn't resolve your issue, you'll have to open a new issue. Always better to file earlier so it's on my radar & roadmap. The difference in behavior when the parent path contains the string /node_modules/ definitely seems to be caused by those two lines, so solving this will solve my issue. First, my use case - I'm writing a website generator which is supposed to be installed with npm and run with npx, and to import files from the main package directory. This now breaks my paths configuration if my generator package is properly installed (i.e. not symlinked), because the files that the start script loads are now within a node_modules directory, so tsx refuses to use my tsconfig.json to resolve paths, even if I run the script or tsx itself manually from that directory. My current development workaround is mv node_modules mode_nodules; ln -s mode_nodules node_modules. This "works", i.e. the start script runs tsx, which correctly uses paths from the specified tsconfig to resolve modules. The drawback is that all modules from all packages are now loaded using the same logic specified in the originally used tsconfig. This hasn't broken anything in my project so far, but it has made the load time a tiny bit longer. (Interestingly, it also gave me a warning which found a bug in jsonpath. I must report it after I get some sleep.) I can see two ways of fixing this: Always use the "local" tsconfig (i.e. one that's not above the closest package.json). Use the originally specified tsconfig.json for files within the package where tsx was started (i.e. walk up from cwd to the nearest package.json; any file that's within that directory uses tsconfig paths for resolving modules). Any reason for publishing a non-compiled version to npm that needs a real-time bundler to run? That's very untypical. I believe in your case it makes sense to use e.g. tsup and publish cjs and mjs versions which will start natively. The website generator imports jsx files which need to be transpiled before import, so using tsx at runtime is already a part of the project. I realize that there must be a way to accomplish that without loading the whole thing through tsx every time it's started. But using tsx as a drop-in replacement for node would make it much more comfortable, and the expense is really just compiling several additional files at startup. I've encountered the same issue. In the above screenshot, the two entry files have exactly the same content, but only the one outside node_modules can be loaded with paths support. Encountered the same issue now. Removing this line helped solve it: https://github.com/privatenumber/tsx/blob/985bbb8cff1f750ad02e299874e542b6f63495ef/src/esm/loaders.ts#L155 I'd either remove it completely - I don't see it being a big performance bottleneck, as the resolution is executed only once per import; or at least make it a flag when running tsx, something like --allow-tsconfig-paths-in-node_modules. @privatenumber your input would be greatly appreciated here. If you're still struggling to understand how exactly this happens, I'd be happy to create a repro for you. This change introduces a significant risk. Compilation should be scoped to the package, but removing the node_modules check extends the project's compilation settings beyond the intended scope, which also makes it non-compliant with the TypeScript compiler specification. In the next major release of tsx, I plan to improve monorepo support by reading the tsconfig file in any symlinked dependencies.
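The behavior being debated reduces to a small decision: resolve a tsconfig-style alias only when the importing file sits outside node_modules. A Python sketch of that logic; the function and parameter names are hypothetical, written to mirror the check discussed in this thread, not tsx's actual code:

```python
def resolve_alias(specifier, importer_path, paths, skip_node_modules=True):
    """Map a tsconfig-style alias like '@/value' to a target path,
    unless the importing file lives under node_modules (the guard
    this issue is about). Returns None when resolution is refused
    or no alias prefix matches."""
    if skip_node_modules and "/node_modules/" in importer_path:
        return None  # the loader refuses to apply tsconfig paths here
    for prefix, target in paths.items():
        key = prefix.rstrip("*")
        if specifier.startswith(key):
            # Substitute the matched suffix into the target pattern.
            return target.rstrip("*") + specifier[len(key):]
    return None


paths = {"@/*": "./src/*"}
print(resolve_alias("@/value", "/proj/test/index.ts", paths))              # ./src/value
print(resolve_alias("@/value", "/proj/node_modules/test/index.ts", paths)) # None
```

The two proposed fixes in the thread amount to replacing the substring check with a scope check: instead of "is node_modules anywhere in the path", ask "is this file inside the package whose tsconfig we loaded".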
This approach will enable multiple projects to share paths by extending from a base tsconfig, providing a more reliable solution.

I don't recall making that claim before. It's also worth mentioning that esbuild-register is not exactly a baseline for TypeScript spec compliance.
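For readers unfamiliar with the feature this thread is about, a tsconfig.json `paths` mapping looks roughly like this (the alias name and directories here are invented for illustration):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@lib/*": ["src/lib/*"]
    }
  }
}
```

With such a configuration, an import like `import { helper } from "@lib/helper"` resolves to `./src/lib/helper`; it is this mapping that tsx declines to apply to files located under a node_modules directory.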
GITHUB_ARCHIVE
I was browsing here, and came across Ulfnic’s Terminal Tuesday thread. And while I don’t understand a lot of it, it got me thinking. I have a headless server, but it rips my CDs and DVDs for use with Jellyfin. I had been using the command line, but couldn’t get the rip I wanted. Turns out, if you run ssh -X you can then run programs like picard or handbrake-gtk FROM your server but ON your local machine! This helped me try several different audio/video options to get Handbrake right, and let me use Picard to quickly relabel/rename a few CDs that all got uploaded as Title1.mp3 instead of the right name when I ripped them. I do have X11Forwarding yes in /etc/ssh/sshd_config, and looking at this post it seems like that is all you need. If I run sudo apt-get install xorg it does find a lot of X packages and dependencies to install. So based on that and the previous command, it seems like it can somehow pass X11 instructions on to my laptop since it’s running X.

I’ve started using this technique in the last couple of days. It’s brilliant. From my laptop in my kitchen I can view all the virtual machines that I have on my workstation in the basement. I’m using a Core2 Duo machine and it was getting crushed by running VNC and a Zoom call at the same time. ssh -X seems to use practically no CPU, so a Zoom call on top should be easily manageable. It has the added bonus of managing windows as if they were completely native, which is less confusing and more responsive than a VNC virtual desktop.

What’s going on here is that an Xorg desktop is a client-server situation. The X server gives you the graphics mode, and input methods (keyboard, mouse). The programs we run are X clients. In order to display a window, the client connects to the server, and tells the server how to draw the window using a communication protocol. This can be done over unix sockets, or a network protocol. The X Window System has always worked this way.
The server and client have never needed to be on the same computer, as you’ve seen. SSH simply creates a tunnel and makes the remote computer think that a display server is running there. When you run a program, it is running on the remote system, using the resources of that system, and telling the X server how to draw its windows, just like always. This is why resource utilization is low on your local machine: it’s just receiving a stream of instructions on how to draw windows from the client on the remote machine, and sending inputs (keyboard and mouse) back to the client. It doesn’t take a lot of local processing power to do that. You can even do a full-screen desktop session this way (using Xnest and XDMCP). No VNC required, although VNC is a faster option than XDMCP and X11 forwarding. You say that it manages windows as if they were completely native. Well, they are completely native.
OPCFW_CODE
Accessibility in wine msclrhd at googlemail.com Tue Oct 12 09:55:45 CDT 2010

On 12 October 2010 14:21, Seth Shelnutt <shelnutt2 at gmail.com> wrote:
> I can't seem to find much info on this. What is wine's support for MSAA or
> the newer UIAutomation? I see that there is the oleacc dll which is the main
> dll for MSAA. According to the API site, it seems there are still 10 stubs.
> Looking at the source code there haven't been any updates in over a year. We
> don't have the UIAutomation dll yet. Does anyone know the status of
> MSAA/UIAutomation support in wine?
> I'm trying to get NVDA working and still running into some issues with it
> requiring UIAutomation.dll, when according to them, that dll is optional.
> After I get that worked out I'm hoping to see what needs to be done to get
> MSAA supported under wine.

The current wine implementation supports GetRoleText and the like on oleacc.dll, but does not support IAccessible (MSAA) or UIAutomation functionality on the Win32 objects (windows, list boxes, text edit controls, buttons, etc.).

UIAutomation consists of 4 parts:
1/ a COM API to the accessibility functionality
2/ UIAutomation support for all Win32 controls
3/ an MSAA <=> UIAutomation bridge (for applications that use MSAA)
4/ a .NET binding to the UIAutomation objects

To get MSAA working in wine, you will need to (minimally):
1/ complete the IAccessible core support -- LresultToObject and friends
2/ add a WM_GETOBJECT handler to all the Win32 controls and expose IAccessible COM objects (with tests!)
3/ implement/flesh out the WinEvents API to get the IAccessible objects for the events

Given the move to UIAutomation, it would be easier to instead:
1/ expose UIAutomation objects via the WM_GETOBJECT message
2/ use the MSAA <=> UIAutomation bridge to support MSAA

A question here is whether the wine MSAA/WinEvents/UIAutomation implementation should bind to the Gnome/KDE accessibility APIs (GAIL/???).
The benefit here is that the Windows applications will integrate into the native Linux a11y technologies. The problems here are (a) getting Alexandre's buy-in to use the GAIL APIs if present and (b) whether these are going to change in the move over to Gnome3. In terms of applications to test, Firefox supports at least MSAA bindings, as do Qt applications. Others will too, to varying degrees (some relying on the native Win32 support), but FF and Qt implement their own IAccessible bindings. In terms of testing this via applications (in addition to the regression tests that will need to be provided), there are various assistive technologies that make use of MSAA/UIAutomation -- JAWS, Dragon NaturallySpeaking, etc. I am not familiar with NVDA.
OPCFW_CODE
Nice Problems to Have

I've been thinking about "nice problems to have" as a phenomenon recently. I have a feeling they're widely misunderstood. My standard response to the concept has often been "Yes, but nice problems to have are still problems", but actually I think it goes deeper than that: Nice problems to have are also rarely actually nice to have. The phrase is misleading. What we generally mean when we say "nice problem to have" is that the preconditions for having this problem are nice to have, i.e. you cannot have this problem unless some other nice thing happens first. This is not at all a reliable predictor of the amount of suffering involved. Consider, for example, having all your friends and family constantly making demands on you, far beyond your capacity to satisfy, to the point where it would be emotionally exhausting even if you could say yes to everyone, but also you have to say no to most of them and they will resent you for it. This is obviously an intensely painful problem, right? Now suppose it's because you come from poverty and have landed a well paid job. Suddenly it's a "nice problem to have" (you have it because you're being well paid! Being well paid is good!), but that doesn't change how miserable it is. The thing is, by definition, "nice problems to have" are ones that are not going to be shared by a lot of people (otherwise the preconditions would just be normal behaviour and it wouldn't be a nice problem to have). As a result:
- Most people probably won't be able to relate to it, because their lived experiences will be so different.
- People will be envious of you for the nice thing.
Thus the defining characteristic of nice problems to have is not that they are nice, but that you will be afforded no sympathy for having them.
Nice problems to have are the opposite of nice: They isolate you from the ability to complain about them, which removes a major bonding activity with people who have not been successful in the same way as you, and also leaves you feeling worse about those problems around people who do not share your burden. There is a phenomenon of people succeeding and leaving their former friends behind, and while there are many very bad reasons why people do this, I suspect the nice-problems-to-have issue is one of them: None of their former friends are able to be remotely sympathetic to things that are very real sources of their suffering (and indeed the unrelatability of it is probably part of their suffering). This is in many ways a variant of the abstraction stack problem - you need people around you who just get the things you want to talk about - and the solution is probably the same: Regardless of who else you hang out with (and you should hang out with a variety of people), you do need some people you can talk to who understand your problems, even if they're supposedly nice problems to have.
OPCFW_CODE
Thanks J E (-and Roy too!) Ok, so I've talked to the gunsmith who is both building the rifle and making the dies. I just ordered 4 more 6.5mm die blanks and they are being shipped up to the smith (who lives near me) overnight for delivery tomorrow. We are getting together Saturday morning to both make the intermediate step dies and anneal the cases. 2 questions:

1 - He just got in the Ken Light automated case annealer, but although it will give us the most consistent case neck and shoulder temps, he's concerned that with a case this short/fat, the automated rotary case annealer might allow too much transfer of heat down to the base. Has anyone used an approach other than the water-in-the-pan method on pretty short, fat cases, and should we be concerned? Obviously with 200 - 300 cases to do, this will go MUCH faster - if it works ok...

2 - I've heard there are 2 possible designs we can go for on the form dies. The first is to use the first die to severely reduce the existing shoulder angle while leaving the distance between the current base and the current neck/shoulder junction unchanged, but moving the distance between the base and the body/shoulder junction to the target length of 1.298"; then with each successive die, start to re-steepen that shoulder angle, essentially making the neck longer and longer as we decrease the distance between the base and the shoulder/neck junction point. The second option is to instead leave the shoulder angle alone and move the body/shoulder junction back toward that 1.298" point by shoving it back only about .06" to .08" with each pass, for 3 to 4 passes. In other words, halfway through the 4 passes with the first approach we would have a case with a very long, very gently sloping shoulder, while with the second approach, halfway through the 4 passes we'd have a case that looks like it has 2 shoulders (until we completed all the passes, of course).
Make sense? So my question is: which of the 2 approaches is better - create a very, very shallow sloping shoulder and gradually steepen it, or push the shoulder back, keeping it at 35 degrees, but take 4 passes to move back the entire shoulder? I've had one person each, both very reputable, recommend each method. So any input from anyone who's done this (ESPECIALLY with these doggone thick, apparently harder-than-average WSM cases - I'm using Winchester cases, in fact) would be very appreciated. We only have 4 blank dies to do this with and are running very short on time, so it's critical we get it right the first time this Saturday morning. Thanks everyone for your time!!!
OPCFW_CODE
Is my stainless steel kitchen sink really chemically dangerous? My stainless steel kitchen sink arrived with the P65 warning for Nickel and Chromium. Will these chemicals leach into the water over time? Are they present on the surface? Can they be absorbed by my hands as I use the sink? Simply wondering if there is a safer sink available without the P65 warning. Thanks!

Tongue in cheek: Your stainless steel sink is only dangerous if you live in California. Consider moving, and take the sink with you. California passed Proposition 65, which makes manufacturers liable for failure to include such a notice, regardless of the level of low-risk substances, so manufacturers put the notice on everything. As this NY Times article suggests, the notice has become a noisy alarm. https://www.nytimes.com/wirecutter/blog/what-is-prop-65/

Chromium-free stainless steel is as common as unicorns producing rainbow ice cream. Chromium, nickel and many other elements are common alloying elements used to alter the properties of the steel. They do "leave" the material, but only under conditions like welding, exposure to strong oxidizers, high-temperature exposure and such. Your sink is absolutely safe to use as usual - you don't wash your plates in aqua regia (HCl+HNO3), do you?

Stainless steel contains chromium as the key element for its stainlessness. When oxygen reacts at the surface, it reacts with chromium first and forms a chromium oxide layer, which is chemically stable and dense enough to prevent further oxygen penetration into the steel. Using chromated goods is similarly safe, and the protection works in a similar way, with the difference that the chromium atoms do not need to diffuse to the surface. The big difference here is the production. When alloying the steel with chromium, the pure metal is added to the alloy prior to casting. The process is dangerous, and the danger ends there. Chromating, on the other hand, is usually done by electrolysis of highly toxic chromium salts.
If properly cleaned, the product is safe. The production line is not. You could put a P65 warning on almost anything, because there is a 99% chance it contains carbon and nitrogen, and those elements form a cyanide group, which is a substantial part of some very strong poisons...

Robert Johnson's answer correctly outlines that P65 has caused manufacturers to use the warning as a legal defense even when the science doesn't actually warrant it. Since none of the answers here explain the situation with chromium toxicity, I thought I'd supplement, given how often one comes across chromium and stainless steel in construction. Every element has a different reactivity depending on its charge and its participation in a broader molecular structure, the latter being measurable through bioavailability. From the Wikipedia article on chromium toxicity: "Chromium toxicity refers to any poisonous toxic effect in an organism or cell that results from exposure to specific forms of chromium—especially hexavalent chromium.[1] Hexavalent chromium and its compounds are toxic when inhaled or ingested. Trivalent chromium is a trace mineral that is essential to human nutrition." So chromium is actually nutritious in one form and toxic in another, so the details count. Another important detail is how much. Almonds famously contain cyanide, which is highly toxic, but not enough of it to be dangerous. Even water has a median lethal dose. Many tools are electroplated with chrome, so the concentration of chrome on a crescent wrench puts the chrome in SS to shame. You need not worry, in the least. (At least about the chromium or nickel in the sink coming out to get you.)

But just for completeness' sake: The YouTube channel 'Applied Science' did a very neat experiment on the amount of lead leached out of lead 'crystal' glasses - result: non-zero, but nothing to lose sleep over. In that case the leaching occurs from the upper nanometers of the material.
For stainless steel, as long as you are not conducting (pun intended) electrochemistry in your sink, the same applies: given high enough measurement sensitivity, you will find these elements present in the water, but not in quantities that need concern you.

I would be more concerned about the water coming out of the tap than the P65 on the sink. Water deemed safe is often very borderline, or even toxic. My tap water not only tastes like a public swimming pool, but leaves black water marks, and was changing from blue-tinted to yellowish last year... I don't know what's in it, but I filter it before use!
STACK_EXCHANGE
The Interpretation of Textual Forms

|ToExpression[input]||create an expression by interpreting strings or boxes|

In any Wolfram System session, the Wolfram System is always effectively using ToExpression to interpret the textual form of your input as an actual expression to evaluate. If you use the notebook front end for the Wolfram System, then the interpretation only takes place when the contents of a cell are sent to the kernel, say for evaluation. This means that within a notebook there is no need for the textual forms you set up to correspond to meaningful Wolfram System expressions; this is only necessary if you want to send these forms to the kernel.

|FullForm||explicit functional notation|

Built into the Wolfram System is a collection of standard rules for use by ToExpression in converting textual forms to expressions. These rules define the grammar of the Wolfram System. They state, for example, that x+y should be interpreted as Plus[x,y], and that x^y should be interpreted as Power[x,y]. If the input you give is in FullForm, then the rules for interpretation are very straightforward: every expression consists just of a head followed by a sequence of elements enclosed in brackets. The rules for InputForm are slightly more sophisticated: they allow operators such as +, =, and ->, and understand the meaning of expressions where these operators appear between operands. StandardForm involves still more sophisticated rules, which allow operators and operands to be arranged not just in a one‐dimensional sequence, but in a full two‐dimensional structure. The Wolfram System is set up so that FullForm, InputForm, and StandardForm form a strict hierarchy: anything you can enter in FullForm will also work in InputForm, and anything you can enter in InputForm will also work in StandardForm. If you use a notebook front end for the Wolfram System, then you will typically want to use all the features of StandardForm.
If you use a text‐based interface, however, then you will typically be able to use only features of InputForm. If you copy a StandardForm expression whose interpretation can be determined without evaluation, then the expression will be pasted into external applications as InputForm. Otherwise, the text is copied in a linear form that precisely represents the two-dimensional structure using \!\(…\). When you paste this linear form back into a Wolfram System notebook, it will automatically "snap" into two‐dimensional form.

|ToExpression[input,form]||attempt to create an expression assuming that input is given in the specified textual form|

StandardForm and its subsets FullForm and InputForm provide precise ways to represent any Wolfram System expression in textual form. And given such a textual form, it is always possible to convert it unambiguously to the expression it represents. TraditionalForm is an example of a textual form intended primarily for output. It is possible to take any Wolfram System expression and display it in TraditionalForm. But TraditionalForm does not have the precision of StandardForm, and as a result there is in general no unambiguous way to go back from a TraditionalForm representation and get the expression it represents. When TraditionalForm output is generated as the result of a computation, the actual collection of boxes that represent the output typically contains special Interpretation objects or other specially tagged forms that specify how an expression can be reconstructed from the TraditionalForm output. The same is true of TraditionalForm that is obtained by explicit conversion from StandardForm. But if you edit TraditionalForm extensively, or enter it from scratch, then the Wolfram System will have to try to interpret it without the benefit of any additional embedded information.
OPCFW_CODE
Even though the cryptocurrency bitcoin has been around for over a decade (yes, it has been that long), there are still many myths around it, and hardly anyone knows how blockchain technology and cryptocurrencies work. This is especially true for quite a number of anti-financial crime and compliance organizations. There are still only a few experts who understand blockchain technology and cryptocurrencies from an anti-financial crime and compliance perspective. In this Essential Guide to Blockchain Technology, you will learn the nuts and bolts of blockchain technology that you can leverage for cryptocurrency financial crime compliance.

Table of Contents
- What Is A Blockchain?
- How Does A Blockchain Work?
- Understanding The Blockchain Consensus Mechanisms
- The Proof Of Work Mechanism
- The Proof Of Stake Mechanism
- Other Mechanisms
- What Can You Do With Blockchain Technology?

What Is A Blockchain?

In the simplest terms, a Blockchain is a diary that is almost impossible to forge. In more advanced terms, the blockchain can be thought of as a distributed database. By these means, Blockchain is a particular type or subset of the so-called distributed ledger technology, or DLT. DLT is a way of recording and sharing data across multiple data stores. All of these distributed and individual data stores together make up the database. In practice, blockchain is a technology with many faces. It can exhibit different features and covers a wide array of systems that range from being fully open and permissionless to being permissioned:
- On an open, permissionless blockchain, a person can join or leave the network at will, without having to be approved by any entity. All that is needed to join the network and add transactions to the ledger is a computer on which the relevant software has been installed. There is no central owner of the network and software, and identical copies of the ledger are distributed to all the nodes in the network.
The vast majority of cryptocurrencies currently in circulation are based on permissionless blockchains. This includes cryptocurrencies such as Bitcoin, Bitcoin Cash, Litecoin, and others.
- Secondly, there is the permissioned blockchain. On a permissioned blockchain, transaction validators – which are the nodes – have to be pre-selected by a network administrator. The network administrator sets the rules for who may join the network. This makes it easy to verify the identity of the network participants. However, at the same time, it also requires network participants to put trust in a central coordinating entity to select reliable network nodes. In general, permissioned blockchains can be further divided into two subcategories:
- On the one hand, there are open or public permissioned blockchains, which can be accessed and viewed by anyone, but where only authorised network participants can generate transactions and update the state of the ledger.
- On the other hand, there are closed or enterprise permissioned blockchains, where access is restricted and where only the network administrator can generate transactions and update the state of the ledger.

What is important to note is that, just like on an open permissionless blockchain, transactions on an open permissioned blockchain can be validated and executed without the intermediation of a trusted third party. Some cryptocurrencies, like Ripple and NEO, utilise public permissioned blockchains.

How Does A Blockchain Work?

Essentially, the Blockchain can be thought of as a distributed database. Additions to this database are initiated by one of the members, which are the network nodes. These nodes usually exist in the form of computers. Each node maintains a copy of the entire Blockchain. The nodes also create new blocks of data, which can contain all sorts of information. Among other information, the block contains a hash. A hash is a string of numbers and letters, and each new block generates a hash.
The hash depends not only on the block itself, but also on the previous block's hash. This is one of the reasons why the order of the blocks matters and why blocks are added to the Blockchain in the order that they occurred. Even a small change in a block creates a completely new hash. After its creation, a new block is broadcast to every party in the network in an encrypted form so that the transaction details are protected. The nodes of the network check the validity of each new block that is added. Once a block reaches a certain number of approved transactions, a new block is formed. The determination of the block's validity happens in accordance with a pre-defined algorithmic validation method, commonly referred to as a "consensus mechanism". The nodes check the hash of a block to make sure the block has not been changed. Once validated, the new "block" is added to the blockchain. As soon as the nodes have approved the new block, the Blockchain or ledger is updated with it, and it can no longer be changed or removed. It is therefore considered impossible to forge. You can only add new entries to it, and the registry is updated on all computers on the network at the same time. The blocks are also signed with a digital signature using a private key. Every user on a blockchain network has a set of two keys: firstly, a private key, which is used to create a digital signature for a block; and secondly, a public key, which is known to everyone on the network. A public key has two uses: on the one hand, it serves as an address on the blockchain network; on the other hand, it is used to verify a digital signature and validate the identity of the sender. A user's public and private keys are kept in a digital wallet or e-wallet. Such a wallet can be stored online or offline. Online storage is often referred to as hot storage, and offline storage is commonly referred to as cold storage.
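The hash-chaining behaviour described above can be sketched in a few lines of Python. This is a simplified illustration, not a real blockchain implementation: the block contents and the choice of SHA-256 are assumptions made for the example.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents; the contents include the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Build a tiny chain: each block stores the hash of its predecessor.
genesis = {"index": 0, "data": "genesis", "prev_hash": "0" * 64}
block1 = {"index": 1, "data": "Alice pays Bob 5", "prev_hash": block_hash(genesis)}
block2 = {"index": 2, "data": "Bob pays Carol 2", "prev_hash": block_hash(block1)}

# Tampering with an earlier block changes its hash, breaking every later link.
genesis["data"] = "forged entry"
print(block_hash(genesis) == block1["prev_hash"])  # False: the chain detects the change
```

Because each block's hash feeds into the next block, rewriting any historical entry invalidates all blocks after it, which is why the ledger is considered practically impossible to forge once the network has accepted it.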
Understanding The Blockchain Consensus Mechanisms

In principle, any node within a blockchain network can propose the addition of new information to the blockchain. In order to validate whether this addition of information is legitimate, the nodes have to reach some form of agreement. Here a "consensus mechanism" comes into play. A consensus mechanism is a predefined, specific, cryptographic validation method that ensures a correct sequencing of transactions on the blockchain. In the case of cryptocurrencies, such sequencing is required to address the issue of double-spending. Double-spending is when the same payment instrument or asset is transferred more than once, which would happen if transfers were not registered or controlled. A consensus mechanism can be structured in a number of ways. In the context of cryptocurrencies there are two predominant consensus mechanisms: the Proof of Work mechanism and the Proof of Stake mechanism.

The Proof Of Work Mechanism

In this kind of system, network participants have to solve so-called "cryptographic puzzles" to be allowed to add new "blocks" to the blockchain. This puzzle-solving process is commonly referred to as "mining". In simple terms, these cryptographic puzzles are made up of all the information previously recorded on the blockchain and a new set of transactions to be added to the next "block". The input of each puzzle becomes larger over time, resulting in a more complex calculation. The PoW mechanism therefore requires a vast amount of computing resources, which consume a significant amount of electricity. If a network participant solves a cryptographic puzzle, it proves that he has completed the work, and he is rewarded with a digital form of value – or, in the case of a cryptocurrency, with a newly mined coin. This reward serves as an incentive to uphold the network. The cryptocurrency Bitcoin is based on a PoW consensus mechanism. Other examples include Litecoin, Bitcoin Cash, Monero, and others.
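The puzzle-solving loop can be illustrated with a toy proof-of-work in Python. The difficulty of four leading zeros and the block contents are invented for the example; real networks use far harder targets and different block structures.

```python
import hashlib

def mine(block_data: str, prev_hash: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce so the block's hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # the nonce is the proof that work was done
        nonce += 1

nonce, digest = mine("Alice pays Bob 5", "0" * 64)
print(nonce, digest)
```

Note the asymmetry that makes this a useful proof: finding the nonce takes many hash attempts on average, but any node can verify the claim with a single hash.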
The Proof Of Stake Mechanism

In this kind of system, a node acting as a transaction validator must prove ownership of a certain asset in order to participate in the validation of transactions. In the case of cryptocurrencies, this requires a certain amount of coins. This act of validating transactions is called "forging" instead of "mining". For example, in the case of cryptocurrencies, a transaction validator will have to prove his "stake" of all coins in existence to be allowed to validate a transaction. The more coins he holds, the higher his chance of being the one to validate the next block. This has to do with the assumption that he may have greater seniority within the network, earning him a more trusted position. The transaction validator is paid a transaction fee for his validation services by the transacting parties. Cryptocurrencies such as NEO and Ada utilize a PoS consensus mechanism. The PoW and PoS mechanisms are far from the only consensus mechanisms currently in existence. Other examples include proof of service, proof of elapsed time, and proof of capacity. In fact, many other consensus mechanisms are probably being developed at this very moment all over the world. Eventually, they will emerge and become part of new cryptocurrencies.

What Can You Do With Blockchain Technology?

Blockchain can theoretically be applied in a large variety of sectors. This includes trade and commerce, healthcare, governance, and many others. In addition, it has numerous potential applications. It could have an impact on the pledging of collateral, on the registration of shares, bonds and other assets, on the transfer of property titles, on the operation of land registers, etc.
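The stake-weighted selection described in the Proof of Stake section above can be sketched as follows. The validator names and stake amounts are invented for illustration; real protocols combine stake with randomness beacons, coin age, or validator committees rather than raw proportional draws.

```python
import random

def pick_validator(stakes: dict[str, int], rng: random.Random) -> str:
    """Choose the next validator with probability proportional to stake held."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}
rng = random.Random(42)  # fixed seed so the sketch is reproducible
picks = [pick_validator(stakes, rng) for _ in range(1000)]
print({v: picks.count(v) for v in stakes})  # alice is chosen roughly 60% of the time
```

Over many rounds, each validator's share of blocks tracks their share of the stake, which is the sense in which holding more coins gives a higher chance of validating the next block.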
One of the key advantages of blockchain technology is that it makes it possible to simplify the execution of a wide array of transactions that would normally require the intermediation of a third party, such as a custodian, a bank, a securities settlement system, broker-dealers, a trade repository, or other third parties. In essence, blockchain is all about decentralizing trust and enabling decentralized authentication of transactions. Simply put, it allows you to cut out the "middleman". In many cases this will likely lead to efficiency gains. However, it is important to underscore that it may also expose interacting parties to certain risks that were previously managed by these intermediaries. For instance, the Bank for International Settlements recently warned that the adoption of blockchain technology could introduce new liquidity risks. In general, it seems that when an intermediary has additional functions beyond transaction execution alone, the intermediary cannot simply be replaced by blockchain technology. Especially when the intermediary functions as a buffer against important risks, such as systemic risk, it may not be replaced by blockchain technology – or at least not yet. Now you know everything you need to know about the Blockchain: key concepts, terms and definitions, working principles, and application areas.
OPCFW_CODE