Aristotle and the art of software development by Jonathan Dahl

How do you identify a good programmer? Jon says: ethics. Ethics is about how you live your entire life, and he thinks "what makes a good software developer" and "what makes a good person" have parallel answers.

Kant – Only act on principles that you would want to become universal law. Kant would have loved Haskell. There are principles in software, but sometimes they conflict, such as DRY vs. writing understandable code: DRYing something up can make it hard to read.

John Stuart Mill, Utilitarianism – what matters is the effect of the action; the ends justify the means. However, it's hard to know ahead of time whether the effects will be good. The Pragmatic Programmer is a good outcome of Utilitarianism, but Utilitarianism may also lead to sloppy code and processes, or to the cowboy coder.

Aristotle – ethics as virtue. The person is the important part. For Aristotle: ethics == a life well lived == happiness == virtue. Each virtue lies between two extremes: rashness > courage > cowardliness. Aristotle says that you become good by doing good, which is kind of circular. To become a good programmer, hang around good programmers and look at their code. Jon thinks Aristotle would like Ruby because Ruby is written for us as programmers: Haskell is based on math and Java is built for business requirements, but Ruby is for people. Also, to hark back to Matz's point from yesterday, Ruby lies between two extremes: Lisp > Ruby > BASIC.

Fear of Programming by Nathaniel Talbott

A significant part of what we do as programmers is art, and we can learn from artists, who aren't afraid to talk about their feelings. Fear is an emotion that affects our productivity:
- Fear of a blank page and a blinking cursor.
- Fear of existing code:
  - Legacy code that's hard to understand and work with.
  - Good code can inspire fear too, as you might mess it up.
Nathaniel then walked around the room, Oprah style, and had people volunteer things they are afraid of:
- Fear of not finishing.
- Fear of putting it into the wild.
- Fear of losing excitement about coding.
- Fear that what you write will be useless.
- Fear that someone has written it better before you and you just haven't found it.
- Fear that my imagination outreaches my ability.

Fear is good as a warning mechanism, but pathetic as a decision-making tool. You can manage fear by learning; much fear comes down to fear of the unknown. Testing is an antidote to fear, and it's a great way to break a problem down. "The War of Art" and "Art and Fear" are two books Nathaniel recommends on this topic.

How do you, the audience, deal with fear?
- Pair programming.
- Talking it through with someone else.
- Tackling the worst thing first.
- Writing fears down on paper.
- Taking a walk.
- Writing only the test cases.

He ended with the idea that passion and love are the ultimate antidotes to fear.
OPCFW_CODE
IoT Security Studies

A typical IoT penetration test includes the following steps:
1. Determination of IoT Service Scope
2. Information Gathering
3. Vulnerability Assessment
4. Exploitation Phase

- To ensure that security tests are carried out as effectively as possible, your needs are discussed and information is exchanged; the scope, the type of test, and the required information are determined at an initial security meeting.
- At this meeting, it is also decided whether physical security tests will be needed on the IoT devices and which components will be included in the test.

In this step, the attack vectors on the IoT devices are determined first. The basic attack vectors on an IoT device are:
- Hardware attacks
- Firmware reviews (reverse engineering, etc.)
- Network attacks
- Wireless network attacks
- Mobile and web application penetration tests
- Cloud services penetration tests

This process starts with vulnerability assessment and with firmware and application analysis. The following steps are used in firmware analysis:
- Binary analysis:
  - Reverse engineering,
  - Document analysis on the system (looking for sensitive information or certificates),
  - Performing all necessary application tests, appropriate to the type of application, during application analysis.
- Researching communication protocols:
  - Identifying the communication protocols in use (BLE, Zigbee, LoRa, 6LoWPAN),
  - Sniffing, modifying, and replaying communication traffic (relay/replay attacks),
  - Jamming-based attacks,
  - Third-party services (mobile application API services, etc.) that communicate with the IoT devices identified in the information-gathering step.
- Physical security tests:
  - External USB access,
  - External port access,
  - Location and storage environment,
  - Availability of debug console access,
  - Availability of serial console access,
  - Allowed connection methods (wireless, wired, Bluetooth, etc.),
  - Test controls.
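As a concrete illustration of the document-analysis part of the binary-analysis step above, the sketch below scans a firmware image for embedded key or certificate markers. This is a minimal sketch, not part of any standard toolkit: the marker list and the sample blob are assumptions for demonstration only.

```python
# Minimal sketch: scan a firmware image for embedded secrets, as done in
# the document-analysis part of firmware binary analysis.
# The marker list is illustrative, not exhaustive.

MARKERS = [
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN CERTIFICATE-----",
    b"PRIVATE_KEY=",
]

def find_secrets(firmware: bytes):
    """Return (offset, marker) pairs for every marker occurrence."""
    hits = []
    for marker in MARKERS:
        start = 0
        while (pos := firmware.find(marker, start)) != -1:
            hits.append((pos, marker.decode()))
            start = pos + 1
    return sorted(hits)

if __name__ == "__main__":
    # Fake firmware blob standing in for a real dump (e.g. extracted flash).
    blob = b"\x00" * 64 + b"-----BEGIN CERTIFICATE-----\nMIIB...\n" + b"\xff" * 32
    for offset, marker in find_secrets(blob):
        print(f"0x{offset:06x}: {marker}")
```

In a real engagement this kind of scan would run over the filesystem extracted from the firmware, alongside reverse engineering of the binaries themselves.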
- This phase aims to exploit the vulnerabilities identified in the information gathering and vulnerability assessment stages. In this way, the party receiving the penetration test service can see the possible damage of a real cyber attack. In addition, risks are evaluated for the vulnerabilities found: similar vulnerabilities may have different levels of criticality based on ease of exploitation, the access or information required to exploit them, and the like.
- In this step, cyber security experts use the necessary attack techniques to show what a malicious attacker could do, while avoiding damage to the systems as much as possible.
- All detected vulnerabilities and findings are reported. The report is written in plain language understandable by developers, in a standard format supported by screenshots, and is presented to the parties.
- The report consists of sections covering the purpose and scope of the test, the general testing methodology, the security tests performed, and finally an evaluation and summary for administrators.

The widespread use of these systems leads to security vulnerabilities that can have dramatic effects. The Cyber Security Institute conducts security research on IoT systems, monitors current vulnerabilities, and performs hardware and software penetration tests. It provides detailed technical reports and executive summaries as a result of penetration tests, and contributes to raising institutions' security awareness and to eliminating potential vulnerabilities.

- In the software planning and pre-development phase:
  - Helping to design a secure architecture,
  - Recommending best practices for developers to follow,
  - Integrating continuous IoT security testing into the DevOps cycle.
- During development:
  - Iteratively evaluating the product against its security requirements,
  - Continuously reviewing code for security,
  - Incorporating a security perspective as part of an automated process.
- Post development:
  - Performing penetration tests for all major releases,
  - Managing the security program and interacting with external developers,
  - Patch management and recommending security updates.

SGE conducts penetration tests and security audits for both public institutions/organizations and private sector companies. These tests and audits cover all components of the IT infrastructure. After the tests are completed, detailed technical reports and executive summaries are produced. In addition to technical security tests, social engineering tests are also carried out to increase the security awareness of employees. New exploitation methods and tools are researched and developed by SGE researchers to perform more efficient, high-standard testing.

One of the main goals in this area is information sharing. In addition to the security tests carried out in both the public and private sectors, work is done to raise the quality of such tests across the sector. Workshops are organized to determine the scope and depth of tests and to increase the quality and objectivity of test reports, and joint projects are carried out with regulatory agencies.

Security is not a feature that can be added to software or a system after installation; it should be considered part of the development process. Implementing security functions within the development and deployment processes is both easier and more effective. Work in this area includes:
- Secure software development training,
- Source code analysis to detect vulnerabilities introduced by mistakes made during development,
- Risk analysis and threat modeling to make secure software development processes more effective,
- Researching and implementing new secure software development methods,
- Conducting secure software development workshops and conferences.

SGE also provides information security risk analysis services for military, public and private sector organizations.
Risk analysis projects can be carried out at the software and system level. Risk analysis services are also provided at the corporate level within the scope of ISO 27001 certification. In this context, the institution's business processes are analyzed and the critical ones are identified; the assets involved in these business processes and the dependencies between them are extracted, and asset valuation is carried out. Afterwards, probability and impact values are determined for the risks affecting these assets, and risk values for each asset or process are calculated. Risks are documented in detail in accordance with the scope of the project. Based on the identified threats, measures are specified according to the requirements defined in the ISO 27001 and NIST SP 800-53 standards, the maturity levels of the measures are determined together with the customer, and they are documented in accordance with the project scope. Finally, after the measures are implemented, a follow-up risk study is carried out and the residual risk is evaluated.
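The probability/impact risk-calculation step described above can be sketched numerically. The formula risk = probability × impact, the 1–5 scales, and the level thresholds below are illustrative assumptions; an actual engagement would use the scales and methodology agreed for the ISO 27001 project.

```python
# Illustrative sketch of the risk-calculation step: probability and impact
# are scored on a 1-5 scale and combined multiplicatively (an assumed,
# commonly used convention; real projects define their own scales).

def risk_value(probability: int, impact: int) -> int:
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def risk_level(value: int) -> str:
    # Thresholds are arbitrary example values.
    if value >= 15:
        return "high"
    if value >= 8:
        return "medium"
    return "low"

# Hypothetical assets with (probability, impact) scores from the valuation step.
assets = {
    "customer database": (4, 5),   # likely target, severe impact
    "internal wiki": (3, 2),
    "build server": (2, 4),
}

for name, (p, i) in assets.items():
    v = risk_value(p, i)
    print(f"{name}: risk={v} ({risk_level(v)})")
```

After the selected measures are implemented, the same calculation is repeated with updated scores to evaluate the residual risk.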
OPCFW_CODE
community: improve retrieval speed of BM25Retriever

PR Message

Overview

Significantly improved the search speed of BM25Retriever (approximately 50 times faster for retrieval over 100k documents). Initialization time increases by 2-3 times, but this is a one-time cost. Slight differences in search results may occur, but these are negligible for practical use. There are no API changes, so existing code can be used as-is. The scipy library is now required as an additional dependency, which can be installed via pip. If this modification aligns with the principles of langchain, I would be grateful if it could be merged.

Cause

In rank_bm25, used by BM25Retriever, BM25 scores are calculated using dictionary operations for each query, which appears to be the cause of the slow speed.

Solution and Implementation

TFIDFRetriever is fast because it pre-computes weight matrices for the corpus and uses matrix operations to calculate similarities during searches. I implemented a similar approach for BM25Retriever. Specifically, I made the following changes:
- Extended rank_bm25.BM25Okapi to add a method for converting sentences into weight vectors.
- Used the scipy library (installable via pip) for efficient storage and computation of sparse matrices.
- In BM25Retriever.from_texts, created BM25 weight vectors for each document in the corpus and saved them as properties.
- In BM25Retriever._get_relevant_documents, calculated BM25 scores as the dot product of the query word-frequency vector and the corpus document weight vectors.

Evaluation

When evaluated on a corpus of 100,000 documents, initialization became 2-3 times slower, but searching became about 50 times faster. Since initialization can be completed before the user starts, search speed is more likely to affect user experience, so this improvement should be useful for many use cases. Because the search process was changed, the BM25 scores no longer exactly match those from before the change, but the difference was very small.
Specifically, while the average of the top 100 BM25 scores was around 30, the absolute difference between the conventional and improved methods' BM25 scores was on the order of 1e-16. There was also a slight impact on search results, but it seems to be at a negligible level for practical use: when comparing the top 100 results of BM25Retriever.invoke for 100 queries between the conventional and improved methods, 94 queries had completely matching top 100 results. Of the remaining 6 queries, one had a difference in the 6th-place result, and the other 5 all had differences in the 50th-place result. Therefore, for applications like RAG that prioritize top search results, this difference appears to have minimal impact. The source code used for evaluation, detailed results, and analysis are available at the following repository: https://github.com/jiroshimaya/bm25-retriever-eval/tree/main

Checklist
- [x] PR title: "package: description"
- [x] PR message: written with the guidelines in mind.
- [x] Add tests and docs: no tests or docs added.
- [x] Lint and test

Twitter handle: @shimajiroxyz

I am closing this PR of my own accord. In this PR, the score calculation is implemented on the langchain side for the speed improvement. However, after some time and reconsideration, I realized that implementing it independently, without using the standard API of rank_bm25, could lead to unnecessarily high maintenance costs, which is not desirable. If the maintainers are interested in this PR, feel free to reopen it.
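The sparse-matrix approach described in this PR can be sketched as follows. This is a simplified illustration of the idea, not the actual patch: the whitespace tokenization, the exact BM25Okapi IDF variant, and the binary query vector are assumptions made for brevity.

```python
import math
import numpy as np
from scipy import sparse

# Sketch: precompute per-document BM25 term weights as a sparse matrix,
# then score a query with a single dot product (the idea behind the PR).

k1, b = 1.5, 0.75

corpus = [
    "the quick brown fox".split(),
    "jumped over the lazy dog".split(),
    "the dog barked".split(),
]

vocab = {w: i for i, w in enumerate(sorted({w for doc in corpus for w in doc}))}
n_docs = len(corpus)
avgdl = sum(len(d) for d in corpus) / n_docs

# Document frequency per term, for a simplified BM25-style IDF.
df = np.zeros(len(vocab))
for doc in corpus:
    for w in set(doc):
        df[vocab[w]] += 1
idf = np.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)

# Precompute the BM25 weight of every (document, term) pair: rows = docs.
rows, cols, vals = [], [], []
for d, doc in enumerate(corpus):
    dl = len(doc)
    tf = {}
    for w in doc:
        tf[w] = tf.get(w, 0) + 1
    for w, f in tf.items():
        j = vocab[w]
        weight = idf[j] * f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avgdl))
        rows.append(d); cols.append(j); vals.append(weight)
weights = sparse.csr_matrix((vals, (rows, cols)), shape=(n_docs, len(vocab)))

def scores(query_tokens):
    # Binary term-presence vector; the dot product yields per-doc BM25 scores.
    q = np.zeros(len(vocab))
    for w in query_tokens:
        if w in vocab:
            q[vocab[w]] = 1.0
    return weights @ q

print(scores("lazy dog".split()))
```

Once `weights` is built, each query costs one sparse matrix-vector product instead of per-query dictionary work, which is where the reported ~50x speedup comes from.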
GITHUB_ARCHIVE
I just purchased a Maxtor 500gb ATA/100 internal HD to replace my WD 120gb ATA that is starting to make noise; fyi, both are PATA, not SATA. I am running Windows 2000 Pro SP4, and edited the registry to enable BIGLBA. Installation worked fine: I removed the extra 20gb that was the slave HD, installed the new HD (jumper set to the cable select position), and the BIOS identified everything correctly, 120gb primary master, 500gb primary slave. I installed MaxBlast 5 from the CD, restarted, and ran the program. I allowed it to partition the new drive, and I was able to open it in Explorer no problem. Then I started the clone process. Everything looked good: the old source HD (WD 120gb, C:) was selected, and then the destination (MX 500gb, F: or G:, can't remember); they were the only two HDs shown and I double checked that I selected properly. I clicked next for it to reboot, it started the normal restart procedure, and after it displays the Windows progress bar, it goes to a blue screen with a stop error, telling me to remove any newly installed hardware or software. I reset and get the same result. I disconnected the new 500gb drive, leaving my original 120gb master drive, and tried to start the computer, and the BIOS does not recognize it as a HD for some reason (uh oh!); it goes to a black screen asking for a boot disk. I installed the new 500gb by itself as the primary, and it is recognized in the BIOS, then goes to a black screen with an NTLDR error, so worst case I should still be able to install Windows to the new disk, right? But my problem is that I'm in graphic design and have irreplaceable files on my 120gb drive, and considering it didn't even have enough time to delete files, and that my custom Windows start screen loads (which has to use a .bmp located on the drive to display), the data should still be on the disk, correct?
I just don't know much about recovery options, and unless I can get this resolved for free I will be very disappointed in my Seagate purchase, I don't have the money to pay someone to recover my files, it just seemed like such a simple process I don't know what went wrong. I'm pretty upset right now, really hoping someone can help me out, even if you're not 100% on the answer, any suggestions would be great. I'll be checking this from another computer in my house. You'll be disappointed in Seagate? You were the one who didn't backup your files before you attempted a risky procedure, which a drive swap certainly is. Sorry man, this one's on you. I'd try first of all to connect the new drive as primary master, make sure it's connected to the end connector of the cable (which is master) and not to the middle (which is slave). Try the jumper on cable select and if that doesn't work try it on master. If that fails, take your Windows install CD and reboot to the CD and try a Windows repair. Maybe that will get the cloning completed. If that fails, put the old 120 GB drive back as primary master and remove the new drive and try the Windows install CD thing again to repair Windows. Good luck, and make and keep a backup once you finish all this. Haha, hell yes I'd lay the blame on Seagate. They make the software they include with their product, it should work, or at least not fubar the user's OS so they can't even log into their comp. I followed all directions exactly with no warning signs of anything wrong and the software did not work properly, I can't imagine how someone who wasn't knowledgeable about computers would feel if this happened to them (although they probably wouldn't install stuff on their own). Whatever, I got the computer working and copied the files over on my own, I just know I'm not the only one to experience this problem after reading through this forum. I don't have the cash to throw down on an external right now, so I had to do this in one shot. 
Thanks for nothing MaxBlast you POS program..
OPCFW_CODE
Client Server Relationship

The key characteristic of client/server systems is that the client sends a request to a server, and the server responds by carrying out a function, such as sending information back to the client. The term server refers to a host running a software application that provides information or services to other hosts connected to the network. A well-known example is a web server. There are millions of servers connected to the Internet, providing services such as web sites, email, financial transactions, music downloads, etc. A factor that is crucial to enabling these complex interactions to function is that they all use agreed standards and protocols. To request and view a web page, a person uses a device that is running web client software. A client is the name given to a computer application that someone uses to access information held on a server. A web browser is a good example of a client.

Role of Protocol in Client Server Communication

A web server and a web client use specific protocols and standards in the process of exchanging information to ensure that the messages are received and understood.

Types of protocol

Hypertext Transfer Protocol (HTTP) governs the way that a web server and a web client interact. HTTP defines the format of the requests and responses exchanged between the client and server. HTTP relies on other protocols to govern how the messages are transported between client and server. Transmission Control Protocol (TCP) is the transport protocol that manages the individual conversations between web servers and web clients. TCP formats the HTTP messages into segments to be sent to the destination host. It also provides flow control and acknowledgement of packets exchanged between hosts. The most common internetwork protocol is Internet Protocol (IP).
IP is responsible for taking the formatted segments from TCP, assigning the logical addressing, and encapsulating them into packets for routing to the destination host.

Network Access Protocols

Ethernet is the most commonly used protocol for local networks. Network access protocols perform two primary functions: data link management and physical network transmission. Data link management protocols take the packets from IP and encapsulate them into the appropriate frame format for the local network. These protocols assign the physical addresses to the frames and prepare them to be transmitted over the network. The standards and protocols for the physical media govern how the bits are represented on the media, how the signals are sent over the media, and how they are interpreted by the receiving hosts.

TCP and UDP Transport Protocols

Each service available over the network has its own application protocols that are implemented in the server and client software. In addition to the application protocols, all of the common Internet services use Internet Protocol (IP) to address and route messages between source and destination hosts.

Transmission Control Protocol

When an application requires acknowledgment that a message has been delivered, it uses TCP. This is similar to sending a registered letter through the postal system, where the recipient must sign for the letter to acknowledge its receipt. TCP breaks a message up into small pieces known as segments. The segments are numbered in sequence and passed to the IP process for assembly into packets. FTP and HTTP are examples of applications that use TCP to ensure delivery of data.

User Datagram Protocol

UDP is a 'best effort' delivery system that does not require acknowledgment of receipt. This is similar to sending a standard letter through the postal system: it is not guaranteed that the letter will be received, but the chances are good. UDP is preferable for applications such as streaming audio, video and voice over IP (VoIP).
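The TCP-vs-UDP contrast above can be demonstrated with a few lines of socket code. This is a hypothetical local echo exchange for illustration: the loopback address is arbitrary, and the OS picks the ports.

```python
import socket
import threading

# Sketch: the same one-message exchange over TCP (connection-oriented,
# acknowledged delivery) and over UDP (a single best-effort datagram).

def tcp_exchange(msg: bytes) -> bytes:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()
        conn.sendall(conn.recv(1024))      # echo back over the same connection
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(msg)
    reply = client.recv(1024)
    client.close(); t.join(); server.close()
    return reply

def udp_exchange(msg: bytes) -> bytes:
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    port = server.getsockname()[1]
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(msg, ("127.0.0.1", port))   # fire-and-forget datagram
    data, addr = server.recvfrom(1024)
    server.sendto(data, addr)                 # echo; no connection state kept
    reply, _ = client.recvfrom(1024)
    client.close(); server.close()
    return reply

print(tcp_exchange(b"hello tcp"))
print(udp_exchange(b"hello udp"))
```

Note that the UDP version never establishes a connection: each `sendto` is an independent datagram, which is why delivery is not guaranteed on a real network (though it is effectively reliable on loopback).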
An example of an application that uses UDP is Internet radio.

TCP/IP Port Numbers

When a message is delivered using either TCP or UDP, the protocols and services requested are identified by a port number. A port is a numeric identifier within each segment that is used to keep track of specific conversations and the destination services requested. The client places a destination port number in the segment to tell the destination server what service is being requested. When a client specifies port 80 as the destination port, the server that receives the message knows that web services are being requested. A server can offer more than one service simultaneously; for example, it can offer web services on port 80 at the same time that it offers FTP connection establishment on port 21. The source port number is randomly generated by the sending device to identify a conversation between two devices. This allows multiple conversations to occur simultaneously: multiple devices can request HTTP service from a web server at the same time, and the separate conversations are tracked based on the source ports. The source and destination ports are placed within the segment. The segments are then encapsulated within an IP packet. The IP packet contains the IP addresses of the source and destination.
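The source/destination port behaviour described above can be observed directly with sockets. In this sketch the service port is an arbitrary OS-assigned loopback port standing in for a well-known port like 80, and the ephemeral source ports are chosen by the OS.

```python
import socket

# Sketch: inspect the source (ephemeral, OS-chosen) and destination ports
# of TCP conversations on the loopback interface.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # stand-in for a well-known port like 80
server.listen(2)
service_port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", service_port))
conn, peer = server.accept()

src_port = client.getsockname()[1]  # ephemeral source port, picked by the OS
dst_port = client.getpeername()[1]  # the service port the client asked for

print(f"client talks from port {src_port} to service port {dst_port}")
print(f"server sees the conversation keyed by source port {peer[1]}")

# A second simultaneous client gets a different source port, which is how
# the server keeps the two conversations separate.
client2 = socket.create_connection(("127.0.0.1", service_port))
conn2, peer2 = server.accept()
print(f"second client arrives on a different source port: {peer2[1]}")

for s in (client, conn, client2, conn2, server):
    s.close()
```

Both clients target the same destination port, so the (source address, source port) pair is what distinguishes the conversations, just as described above.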
OPCFW_CODE
Arm Precise Data Bus Error

Cause of hard fault from precise data bus error? Hello, I am working with an LX4FS1A on an eval board. I'm running a command line application through the USB port using the CDC Device Class. The program runs fine until I issue a specific command that should enable the PECI0 peripheral.

Compared to precise bus errors, imprecise errors are much trickier to debug, especially without a deep understanding of Arm processors and assembly language. The imprecise and precise flags are found in the BusFault status register, a byte within the CFSR (Configurable Fault Status Register).

I was wondering if anyone knows of a way to force the Cortex-M7 CPU to take a precise exception when a bus fault occurs. I'm writing an application that requires the bus fault handler to know the exact address of the instruction that generated the bus fault so it can take remedial action.

When you get a precise fault (as indicated here), the NVIC actually stores the faulting address in a register for you to look at during debug. Looking in hw_memmap.h, I see that 0x40027000 is the base address for GPIO port H and, looking in hw_gpio.h, offset 0x400 from there is one of the GPIO data registers.

Synchronous bus faults are also described as precise bus faults. They refer to an exception that takes place immediately after the bus transfer is carried out. A synchronous BusFault can escalate into lockup if it occurs inside an NMI or HardFault handler.
Cache maintenance operations can also trigger a … Documentation – Arm Developer

Nov 24, 2020 · To make it easier to identify exactly which type of HardFault your application has encountered, there is a View > Fault exception viewer window available in recent versions of IAR Embedded Workbench for Arm. In legacy versions there is a debugger macro file available, located in the installation directory: arm\config\debugger\ARM\vector_catch.mac

In computing, a bus error is a fault raised by hardware, notifying an operating system (OS) that a process is trying to access memory that the CPU cannot physically address: an invalid address for the address bus, hence the name.
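The CFSR bit layout referred to above can be decoded with a small sketch. The bit positions below follow the Armv7-M architecture (the BusFault status byte occupies bits 8–15 of the CFSR); the sample register value is made up purely for illustration.

```python
# Sketch: decode the BusFault-related bits of a Cortex-M CFSR value
# (Configurable Fault Status Register, memory-mapped at 0xE000ED28).
# Bit positions follow the Armv7-M ARM; the sample value is fabricated.

BFSR_BITS = {
    8:  "IBUSERR      (instruction bus error)",
    9:  "PRECISERR    (precise data bus error)",
    10: "IMPRECISERR  (imprecise data bus error)",
    11: "UNSTKERR     (fault on exception return unstacking)",
    12: "STKERR       (fault on exception entry stacking)",
    13: "LSPERR       (fault during FP lazy state preservation)",
    15: "BFARVALID    (BFAR holds the faulting address)",
}

def decode_busfault(cfsr: int) -> list:
    """Return the names of the BusFault flags set in a CFSR value."""
    return [name for bit, name in BFSR_BITS.items() if cfsr & (1 << bit)]

# Example: a precise data bus error with a valid fault address.
sample_cfsr = (1 << 9) | (1 << 15)
for flag in decode_busfault(sample_cfsr):
    print(flag)
# When PRECISERR and BFARVALID are both set, BFAR (0xE000ED38) contains
# the data address the faulting instruction tried to access.
```

On the target itself, a fault handler would read the live CFSR word at 0xE000ED28 and apply exactly this decoding to tell precise from imprecise faults.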
OPCFW_CODE
So after some 22,000 downloads (thank you) in the 8 months since it was first released, HVRemote has undergone a refresh to make it even easier to configure Hyper-V Remote Management and diagnose issues. The major change in version 0.7 is the ability to perform some verification of the configuration and provide hints about what to follow up on if it detects an error, through the use of the new /target option. Below is an example of the new bit of output from running hvremote /show /target:servername on a Windows 7 client where everything is working just fine (yes, it is fine for the ping to time out; that just means it's blocked by the firewall). HVRemote 0.7 fully supports Windows Server 2008 R2, Microsoft Hyper-V Server 2008 R2 and Windows 7. In addition, the home page has undergone a refresh to cover some of the most commonly asked questions, and the documentation has been brought up to date. For a list of other changes, please see the documentation. Thanks, John.

Valuable additions to HVRemote, and much appreciated. Essential with Hyper-V core R2 and Windows Server 2008 SP2, even with both computers in the same domain.

Thanks John; saved me a lot of 'aggro'.

Any idea why I would be getting an unterminated entity reference error?

C:\>cscript hvremote.wsf /add:david
Microsoft (R) Windows Script Host Version 5.7
Copyright (C) Microsoft Corporation. All rights reserved.
C:\hvremote.wsf(5872, 37) Windows Script Host: Unterminated entity reference - matching ';' not found

David - what version of HVRemote are you running, and where did you download it from? The reason I ask is that v0.7 (only!) has 5679 lines in the file, so I can only assume you have a corrupt download of some sort. Can you try downloading it again from code.msdn.microsoft.com/hvremote.

How do I install this on my Server Core box? It's colo'd and I don't have access to it; I just have the Remote Desktop client working. Thanks in advance.

Frank - you could use copy/paste through an RDP session.
Without Explorer on Core, you could probably paste the text into a Notepad session. However, I would strongly recommend that you do not deploy in this manner (assuming you mean you intend to manage the Hyper-V machine directly over the Internet) without something such as a TS Gateway or similar deployed in front of it to ensure that the system is secure. Opening the WMI management interfaces directly to the Internet is a significant attack vector.

Why isn't it recommended to use HVRemote in a VMM-managed environment? What are the technical difficulties that might occur? I'm planning a Hyper-V deployment with HVRemote and still have to decide whether to integrate the hosts into VMM or not.

Thars - VMM manages its own authorization and delegation model and pushes policy from its servers to the Hyper-V machines. Hence any changes made by HVRemote would be lost. Similarly, changes made by HVRemote have the possibility of making the server inaccessible to VMM.

hi John, I am not able to locate version 1.x, please advise url. regards, Malcolm
OPCFW_CODE
The Article Versioning feature allows knowledge contributors to create multiple versions of a knowledge article. They can create a new version of an article from an existing published version, which can be either the latest published version or an older, outdated version. All changes are stored in the new version of the article, and the information in the existing article remains the same. Here you will find some Frequently Asked Questions related to Versioning and tips to get started.

How do I enable versioning?

This feature is installed with the Knowledge Management Advanced Installer plugin:
- Go to System -> Plugins and search for Knowledge Management Advanced Installer
- Click Activate
- For more information, navigate to the Product Documentation topic: Activate the Knowledge Management Advanced plugin

Note: V2 knowledge bases need to be migrated to V3 before using versioning. To determine whether you have v2 knowledge bases and to migrate them to v3, follow these steps:
- Query the kb_knowledge_base table, filtering for records where Release version is 2.
- Refer to our documentation topic Knowledge Management v3 Migration if you have existing v2 knowledge bases.

The Checkout button is not visible on articles even though the valid_to date has not passed. Why?

The Checkout button is enabled only in the following cases:
- Article Versioning is enabled (the Knowledge Management Advanced Installer plugin is activated)
- The property glide.knowman.versioning.enabled is set to true
- The article is in the Published state

How do you edit the latest version of an article?
The latest version of the article can be edited in the following ways:
- Check out the article and make the changes in the new minor version of the article.
- Some fields of the article can be edited without creating a new version, based on the fields configured for the glide.knowman.versioning.enable_minor_edits property (more information about the properties in Article versioning properties).

What does the script KBVersioningSNC do?

This is a Script Include that contains the article-versioning service methods. You can create a custom definition for the required methods by overriding the method definitions in the KBVersioning Script Include.

How do you get a link to the latest version of the article, regardless of which version the article is in?

Accessing the article via the sysparm_article parameter renders the latest version of the article accessible to the user:
- Knowledge V3: /kb_view.do?sysparm_article=KBXXXXXXXX
- KM Service Portal: /kb?id=kb_article_view&sysparm_article=KBXXXXXXX

Does the Permalink point to a draft or the published version of the article?

The article's permalink points to the latest version of the article "accessible" to the user:
- For users with contribute access, the permalink redirects to the draft version of the article, if available.
- For users with only read access, the permalink redirects to the published version of the article.

Does kb_use show the total number of views for all versions of the article? Why are the numbers from kb_use different from View Count on the kb_knowledge table?

Yes, the kb_use table shows the total number of views across all versions of the article.
The view count, however, is governed by the property glide.knowman.view_age.days, which indicates the number of days for which article views are considered in the view count. The default is 30, meaning only views in the last 30 days are counted, hence the discrepancy between the numbers in kb_use and the view count. See the documentation topic Knowledge properties for more information.

Who gets notified when a new version of the article is created?

Users who are subscribed to the article or the parent knowledge base will receive a notification when a new version is published.
Is there a way to edit a UDF disk image file/ISO image? I have an ISO image with a UDF filesystem and a boot sector and I need to add a file to it. When I do

sudo mount -o loop,rw /tmp/file.iso /tmp/dir

I get

mount: block device /tmp/file.iso is write-protected, mounting read-only

This happens even if I remove loop or add unhide. The file has permissions rw-rw-rw-. I have tried various UDF command-line tools, but they all demand an actual CD drive, and won't even work with the loopback device. So is there anything I can do? Because this has a boot sector, I'd rather edit the ISO file directly than unpack/repack. Using Kubuntu 14.04 here. Thanks. I'm able to browse, add and delete files in an ISO file without unpacking/repacking it by simply opening it with the Archive Manager (Ubuntu 14.04). Hopefully you can do the same using Kubuntu. Archive Manager is file-roller, right? Or has that changed? Yes, that's it; Archive Manager is its new name (like Files for Nautilus...). It does not seem to be reading the UDF section of the disk image. All I see is a single folder . with nothing inside. This is both with the package and with a file-roller master build via jhbuild. For UDF images I only see a README.TXT saying "This disc contains a "UDF" file system and requires an operating system that supports the ISO-13346 "UDF" file system specification.". What you've done is somewhat correct, but you must log in as the root user:

sudo su -

Create a mount point:

mkdir -p /mnt/<mount_name>

Then use the mount command as follows to mount the ISO file:

mount -o loop disk1.iso /mnt/<mount_name>

Change directory and list the files stored inside the ISO image:

cd /mnt/<mount_name>
ls -l

I did use sudo; the output in my original question was with sudo. Edited. See changes in my answer, try that, and please let me know. Still complains about write-protect.
@Mitch Your mount -o loop command says : mount: block device /tmp/toto.iso is write-protected, mounting read-only @SebMa Just to clarify, you did use sudo right? @Mitch I tried both methods : with sudo su - then mount -o loop and I tried with sudo mount -o loop. Both methods said : mount: block device /tmp/toto.iso is write-protected, mounting read-only Try to install smbfs. sudo apt install smbfs, and then try that. This answer is plain wrong and weird. Iso9660 ain't really made to be editable. And the suggestion to install smbfs makes it even weirder. Smbfs is a network file system from Microsoft, and doesn't even make sense for physical disks.
When is a Deadlock not a Deadlock? I'm asking this question because I'm getting a deadlock from time to time that I don't understand. This is the scenario: Stored procedure that updates table A:

UPDATE A SET A.Column = @SomeValue WHERE A.ID = @ID

Stored procedure that inserts into a temp table #temp:

INSERT INTO #temp (Column1, Column2)
SELECT B.Column1, A.Column2
FROM B
INNER JOIN A ON A.ID = B.ID
WHERE B.Code IN ('Something','SomethingElse')

I see that there could possibly be a lock wait, but I fail to see how a deadlock would occur; am I missing something obvious? EDIT: The SPs that I typed here are obviously simplified versions, but I'm using the columns involved. The structure of both tables would be:

CREATE TABLE A (ID INT IDENTITY PRIMARY KEY, Column VARCHAR(100))
CREATE TABLE B (ID INT IDENTITY PRIMARY KEY, Code VARCHAR(100))

Update waiting for the insert to finish, and insert waiting for the update to finish? Both waiting for the other to finish! If both the Update and the Insert were against the same table, unless the Insert statement tries to set an exclusive lock on ALL the tables involved and not only on the #temp one. http://www.codinghorror.com/blog/2008/08/deadlocked.html Is A.ID indexed? In fact, what are the complete table structures involved and the deadlock graph? @Michael Fredrickson: Great read, same problem that I'm having, and the solution is the one that I'm implementing. Unfortunately the cause is not explained; I'm reading through all the comments to see if someone has insight on it. Thanks :) Read the whole thread and found no consensus; even the people who were angered by such a "dumb question on deadlocking" did not provide an answer, just destructive criticism. I'm going to apply the (NOLOCK) hint to the select SP and that should solve the problem. Thanks to all who read and commented :) I'd say toss a Profiler trace on it and capture the deadlock graph that comes out.
Though you'll need a tall beer afterwards from reading it, that'll show you exactly what objects & locks forced the deadlock. For good measure, you can also trap lock escalations. Try this: since it's causing locks, specify the WITH(NOLOCK) table hint on the table names. So something like this for your scenario:

INSERT INTO #temp (Column1, Column2)
SELECT B.Column1, A.Column2
FROM B WITH(NOLOCK)
INNER JOIN A WITH(NOLOCK) ON A.ID = B.ID
WHERE B.Code IN ('Something','SomethingElse')

See how you go then. You can also look up the other T-SQL / SQL Server table hints to see which one suits you best. The one I specified, NOLOCK, will not take locks and will also skip rows locked by other processes, so if you don't care about that, you can use it. I am not sure about temp tables, but you can also use table hints with INSERT: INSERT INTO ... WITH(TABLE_HINT). Hi Pasha, I did end up using the (NOLOCK) option, which solved the problem (please see the comments on the question; there is a link to an excellent article that I used to decide on the (NOLOCK) hint). I'm marking this as the answer since it is actually what I did and nobody else posted it as an "answer". Thanks :)
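As an aside, the cycle at the heart of any deadlock can be sketched with plain threads. This is an analogy, not SQL Server's actual lock manager: the two locks stand in for the locks the UPDATE and the INSERT...SELECT take on the tables, and the fix shown (one agreed acquisition order) is the classic way to make the wait cycle impossible.

```python
import threading

# Analogy (not SQL Server's lock manager): each session needs two locks.
# If each acquired them in opposite order, both could wait on each other
# forever (a deadlock). Acquiring them in one global order prevents the cycle.
lock_a = threading.Lock()   # stands in for the lock on table A
lock_b = threading.Lock()   # stands in for the lock on table B / an index
finished = []

def worker(name):
    # Both workers take lock_a first, then lock_b -- a consistent order.
    with lock_a:
        with lock_b:
            finished.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("update", "insert")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(finished))  # ['insert', 'update'] -- both completed, no deadlock
```

Swap the nesting order in one worker and the same two threads can hang forever, which is exactly the situation the deadlock graph would reveal.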
Moving files can be a tedious job for some projects and there are many different ways to move files. - You can take the easy way – FTP or sFTP (secure FTP) the files down to your local computer, then upload them to the new location. - You can create an archive of the files you want to move on the source server, then SSH into your target server and do a wget command, e.g. wget http://www.some-server-somewhere.com/[filename]. Then you would unpack those files into the destination directory or put them in a working directory and move them from there. Often we find ourselves needing to bulk change all file permissions or all directory permissions for an entire site or directory. To do this with your individual FTP client is time consuming, and some FTP clients try to recursively change everything to the same file permission, including directories. Yes, I am talking to you, WinSCP. To separate these out you just need to log into your server or hosting account via SSH, navigate to the directory you want to start the change in and type: One of the things that a website or server may have to do in its lifetime is scale up to a cluster of servers. What’s a cluster? A cluster can mean several things, but at a bare minimum it means “more than one server”. Yes, I realize that is not all that helpful to understanding all of this just yet, but stay with me.
Typical cluster configurations: One web server – One database server: With this set up we do not need to employ a load balancer that sends traffic requests to one or the other server.… Here is how you can use Windows’ built-in FTP client through My Network Places (or Networks in Vista). How to FTP Files using Windows – Steps: In XP – go to “My network places” In VISTA – go to “Networks” Select “Add new network” Select “Other network place” (usually the last option; some computers “load” this section with ISP offers, etc – by default it is just ‘MSN’) Type in the following: Password: [whatever your ftp pass is] Username: [whatever your username is] Now you can drag and drop files into your ftp space “almost” like your desktop.… We get this a lot when talking to customers, namely they bring up an example of a web hosting outfit who is offering a cheaper rate or more bandwidth for a lower price. Example. We have a client who pushes about 620 gigs of data a month on one of our dedicated servers. They stream a large amount of videos (no, it is not what you think). Now the way we work, being a specialty web host and all – is we do not employ limiting software for our clients as far as how much of the “pipe” they can use.… Securing your servers & applications is always at the forefront of any “good” development group’s conscience. If it is not, then heck, you are amateurs and your company deserves to wither and die, because this is not a business where the “Fisher Price – My First Web Company” type of stuff cuts it. This applies to the following people or companies: - Web Freelancers who deploy open source or use community-grown contributions and freeware code for their clients. - Companies & Developers who deploy or base customer-applications or tools off of open source or other frameworks.
import React from 'react';
import { expect } from 'chai';
import sinon from 'sinon-sandbox';

export default function describeLast({
  Wrap,
}) {
  describe('.isEmpty()', () => {
    let warningStub;
    let fooNode;
    let missingNode;

    beforeEach(() => {
      warningStub = sinon.stub(console, 'warn');
      const wrapper = Wrap(<div className="foo" />);
      fooNode = wrapper.find('.foo');
      missingNode = wrapper.find('.missing');
    });

    afterEach(() => {
      warningStub.restore();
    });

    it('displays a deprecation warning', () => {
      fooNode.isEmpty();
      expect(warningStub.calledWith('Enzyme::Deprecated method isEmpty() called, use exists() instead.')).to.equal(true);
    });

    it('calls exists() instead', () => {
      const existsSpy = sinon.spy();
      fooNode.exists = existsSpy;
      expect(fooNode.isEmpty()).to.equal(true);
      expect(existsSpy).to.have.property('called', true);
    });

    it('returns true if wrapper is empty', () => {
      expect(fooNode.isEmpty()).to.equal(false);
      expect(missingNode.isEmpty()).to.equal(true);
    });
  });
}
Raspberry pi slot machine software Chocolate Slot Machine: We decided to celebrate a Casablanca themed Halloween this year (just in time for the 75th anniversary). We had all sorts of ideas about table games, chocolate poker chips, etc. cluster - Raspberry Pi for Machine Learning - Raspberry … I’m a software engineer interested (and doing research) in machine learning. For some of my machine learning models, I need more computational power than that offered by a single computer. Presentation Machine » Raspberry Pi Geek The Raspberry Pi has an HDMI output that will connect to most modern slide projectors or big-screen TVs. I'll also use a webcam – in this case, a Logitech C310. The software described in this article runs on nearly any Raspberry Pi Linux. I used Raspbian because I like the simple, uncluttered... Raspberry Pi Emulator: The Ultimate Retro Gaming Machine A Raspberry Pi Emulator can provide you with hundreds of hours of fun. 2. Follow the instructions to install the formatting software. 3. Insert your SD card into the computer or laptop’s SD card reader and check the drive letter allocated to it, e.g. G:/ (If you don’t have one then you can buy a USB SD Card... Tutorial : Raspberry Pi How to set up your Raspberry Pi 3 Model B+ - TechRepublic I'll be taking you through how to set up your Raspberry Pi B+ using a Windows machine, ... an open source software library for machine ... a slot at the ... How to set up your Raspberry Pi 3 Model A+ - TechRepublic I'll be taking you through how to set up your Raspberry Pi A+ using a Windows machine, ... see a slot at the ... browse all of the software that's ... AdoPiSoft | How to make WiFi vending machine using Raspberry Pi Car Project : DIN Slot Starter Guide ... Here is a Starter Guide For Your Raspberry Pi Car Project in DIN Slot. Many Technical Matters Around Car Need To Be Known For Complex Project. ... Raspberry Pi Car Project : DIN Slot Starter Guide - The Customize Windows.
Raspberry Pi Car PC System : List of Needed Hardwares - The Customize Windows ... The ultimate grilling machine See more ... Tutorial : Raspberry Pi •Raspberry Pi is a credit card sized bargain micro Linux machine. •The goal behind creating Raspberry Pi was to create a low cost device that would improve programming skills and hardware ... •NOOBS (New Out Of Box Software) is an easy way to install RPi distributions. Raspberry Pi Desktop (for PC and Mac). Debian with Raspberry Pi Desktop is the Foundation’s operating system for PC and Mac. You can create a live disc, run it in a virtual machine, or even install it on your computer. How to turn your Raspberry Pi into the ultimate retro gaming machine ... Dec 14, 2016 ... You'll need a Raspberry Pi to get started. ... It's the software that will power the Raspberry Pi and all of your ... On the Raspberry Pi 3, the model I'm using for this guide, the microSD card slot is on ... Then power up the machine. How to Build a Raspberry Pi Retro Game Console - Lifehacker Feb 9, 2017 ... Since its release, the $35 Raspberry Pi mini-computer has been hailed as ... games, you'll also get access to a full version of the media center software, .... Select+Left Shoulder: Load; Select+Right: Input State Slot Increase ... Luck has nothing to do with it! - BrianChristner.io Jan 8, 2015 ... The Casino Floor was my home for the best part of 10 years. I lived in Las Vegas and worked for a Slot Machine manufacturer as a Casino ...
Discussion in Software PC & Mac started by pristine • Sep 24, 2013. I may have to check this SRWare Iron out. Do you have a source for it? I've been using Mozilla Firefox for as long as I can remember. The browser is fast and the settings are easy to manage. Also there's a whole bunch of options for customization. The browser is secure as well and there's a wide range of add-ons to choose from. I sometimes use Google Chrome if I have issues with Firefox, which doesn't happen often. Before my MacBook Pro packed up, I used Safari and was VERY happy with it. Like someone else said, it never freezes, works very smoothly, is very fast, and I can honestly say I never saw any shady pop up windows. I never knew just how happy till it packed up and I had to revert back to Explorer. Wow, I gave up after trying to use it for a day and swapped over to Firefox, which was slightly better but not much cop. I found it extremely slow and very temperamental. Now I'm on Chrome and I'm really enjoying it. It's not as good as Safari but it's definitely way better than Explorer and Firefox for me. I usually use Firefox as it is really easy to use and very comfortable. I have been using Firefox for a long time. I prefer to use Chrome also sometimes, but only very rarely. I think it is best to use the combination of Chrome and Firefox. When I'm browsing and want absolutely no distractions, I use K-Meleon. I know most of you have never heard of it, but it's a relatively good browser, especially for low end PCs. When I have lots of things to do online like curate content, watch videos when I'm in the mood and so on, I use Firefox. It's probably not the best browser but it's got some nifty add-ons/extensions which make me use it [FF] whether I like it or not. I only use Google Chrome. I used to only use Internet Explorer because it was more convenient - it already came installed with Windows.
However when I got my latest computer and it came with Windows 8, for some reason Internet Explorer seemed to start giving me trouble. It would stop responding all the time, close by itself for no apparent reason and would be really slow and glitchy. I downloaded Chrome and have not gone back since. I don't even know how IE is doing right now... maybe it has improved? I also never used Firefox and as Chrome is working really well for me I don't intend to try it unless I need to for some reason. I'm happy with Google Chrome! I used Firefox for the longest time. I liked the look and style of it. I never cared for Internet Explorer, it always seem counter intuitive. But now, I am using Chrome, and after a little bit of an adjustment from Firefox, I am liking it the best. Well I used Firefox for a few years and finally switched to Chrome. Now I can't get myself to use anything else, because Chrome pretty much has everything I need. It's also considerably faster than firefox and easier to use. I'd recommend it over firefox anytime. Chrome, but pretty much only because I got used to it and I don't feel like switching to Firefox/Nightly again. Firefox would be probably a better choice overall, though. It's not a Google botnet. I used to love Firefox but I notice in the past 2 months that it's been getting too slow that's why I started switching to Google Chrome. It is ok, but I still like the overall layout of Firefox. I think though that over time, I'm going to get used to it anyway so I just keep on practising with it. I also love my iPhone's Safari. I think it has a very simple layout and I never have problem with its speed. Opera is my browser of choice. I don't even remember why I started using it but here I am using it right now. When a webpage I want to enter isn't supported by Opera I use Chrome, so those are the two I use the most. I have Firefox as the backup but I rarely use it, however my brothers are the ones who use it all the time. 
I really like simple stuff, and it's for that reason that I prefer Chrome over any browser around, as much as I prefer Mac. I like it more than Safari, because it's very minimal and very easy to navigate around. Unlike the others: Safari, IE and Mozilla. I use Firefox currently, but sometimes I switch to Chrome if I'm having a problem with Firefox. I don't have much of a preference either way, but I'm a little hesitant to give Google all my info/browsing history/etc. I've used Opera in the past; it was fine as well. I prefer Firefox. I used to use Internet Explorer, but then I discovered that Firefox is much better, so I switched. I haven't really thought of switching to Chrome. I am quite happy and satisfied with Firefox. There are also times that I use Chrome because it has certain features which Firefox doesn't have. But all in all, I prefer Firefox. Chrome. It's my main browser. I used to have Firefox but now I just went straight to Chrome, just because it's faster, except for the downloads; it kind of chokes at some point for some reason. Since I am using a Linux system, the browsers I can use are rather limited: I can only download and use Chrome, Chromium, and Firefox; Firefox comes preloaded on the system. There is also a pretty standard Ubuntu web browser which I only found out about a few months ago, so I have never used that. I mainly use Chrome, but I will use Firefox every now and then. I used Google Chrome and Mozilla Firefox for browsing. They are fast and user friendly. I do not like IE. I also like the designs of Google Chrome and Mozilla; they are simple. The settings and options of the browser are quite easy to understand. I was once an IT tester and Google Chrome is quite convenient when I am testing my websites and web applications.
I ditched Firefox after using it for about 10 years, because it was simply having too many memory problems and the company kept blaming it on extensions when it was clearly a problem with the browser itself that they were trying to deny and keep hush hush. When I realized that all the same extensions I was using in Firefox were now available in Chrome, and Chrome could run them all without hogging up several gigs of memory and crashing repeatedly, that was the last straw. Though, these days I am also having problems with Chrome too. I dislike the fact that they use their own custom version of Flash. Flash is already a mess of a technology riddled with security issues; using a customized version of it sounds even worse. It's bad enough Adobe keeps pushing out security fixes for it several times a month - how often is Google keeping up with these for Chrome's iteration of it? I've also been having a bunch of problems with all browsers in how they handle Silverlight - though that's more of an issue with Microsoft. Pretty much any site I visit these days with Silverlight causes my browser to clam up and freeze indefinitely until I get a message asking me if I want to disable the plugin. I use Internet Explorer for Windows 8.1, not the desktop version but the Metro UI one. Most people say it's better to switch to Chrome, which I did, and since it also offers a Windows 8 mode I've long since switched. My favorite browser is Chrome. I have the five major browsers installed on my computer because I'm in website development, so I need to see what websites look like in all browsers. But for me personally I love Chrome for its speed and easy syncing between devices. I am curious to see the new Microsoft browser though, since they are totally redesigning it for Windows 10.
This is the second article in a two-part series looking at agile documentation. Read Part One here. In this article, I’ll touch on probably the most frequently asked question with regards to documentation: How do you keep it up to date? Before that, let’s dive into our third and fourth goals for documentation. 3. Create empathy Documentation creates understanding - not only an individual’s understanding of the code or the architecture but also understanding in the sense of empathy between the people involved in building it. The empathy between tech decision makers and developers I finished off Part One of the article with an example of how I used a paper widget kit to efficiently explain a data model. That particular kit had a purpose beyond the basic communication of information. The database technology had been quite an important strategic decision in this company, but it was causing a lot of effort for our team. It wasn’t the best fit for the part of the system we were building. The kit and a step-by-step approach gave us an effective and repeatable way to bring across our requirements and create empathy and understanding for our challenges quickly. It helped to keep emotional factors out of the recurring discussions and keep them efficient and fact-based. Empathy among developers Working on software without guidance, without documentation, is anxiety-producing. In her article about "Empathy Driven Development", Duretti Hirpa describes the notion of an “empathetic codebase”. Creating a codebase like that means taking advantage of “any tool that will provide context”, including documentation. “Docs or it didn’t happen”. By creating documentation, we show empathy for our fellow teammates and create an environment where people feel guided and safe when changing the software, especially the newer members of the team.
This goal applies particularly to the category of documentation for onboarding and troubleshooting, like “readme” files, or checklists for things that are not automated yet. The empathy between developers and non-developers It’s not uncommon for developers to complain about their product managers, saying they “don’t really understand how the software actually works”. Documentation can help developers bridge that gap. Here is an example: When building a PDF document generator, the product owner stopped by our desks almost every day to report yet another case where they didn’t like how a particular “page break” was placed by the layout algorithm. Sometimes a fix we did today would bring back a flaw we tackled the day before. To facilitate those discussions, we created a poster outlining the logic of the “page break” algorithm. It helped us show how introducing new rules could invalidate existing rules, and we could then discuss priorities on equal footing. Making the logic transparent helped the users empathize with us, because we showed them the complexity in a more understandable way, and they also understood the scope of the implementation better. It made us feel like we were creating this feature as a team, not as two opposing parties of “feature requesters” and “feature implementers”. Identifying candidates for this type of documentation When a lot of reported bugs turn out to be caused by a misunderstanding of a particular feature. Features that are going through a phase of change and consist of an elaborate system of rules and logic. Everybody involved needs a good understanding of that logic to work on the change together. 4. Help our future selves make informed decisions “If you look at another engineer's work and think, ‘That's dumb. Why don't you just…’ Take a breath.
Find out why the problem is hard.” (Adrienne Porter Felt) Creating an overview that demonstrates the complexity of your solution, as just described, also helps create empathy with other technologists. It helps them understand why the problem you’re solving is hard, which is a good foundation for a constructive discussion. But what if you can’t understand why the problem is hard, because nobody knows what the problem was in the first place? Architecture decision records In his 2011 blog post about architecture decision records, Michael Nygard writes: “One of the hardest things to track during the life of a project is the motivation behind certain decisions.” But, he adds, unless you understand that rationale, you can only either accept the decision blindly or reject it blindly. Writing down the reasoning, context, and consequences of a decision right after you take it can hugely improve future decision-making. You can record decisions with many tools, from a Word document to a Wiki, to simple text files. I’ve successfully used this little tool to create markdown files that were then version controlled along with the code. However, the tool isn’t so much the challenge here. The difficulty is to get into a routine of creating this really valuable form of documentation. We have to overcome the urge to just get on with the execution once we’ve come to a decision, and pause for a bit of documentation while the details are fresh in our minds. Identifying candidates for decision records Decisions that needed a lot of discussion to get to Decisions that are harder to change Imagine somebody joining your team in the future: Do you think they would challenge this decision without having more context? “Whack-a-mole” decisions: Every two or three months a team member brings them up and wants to change them, only to find out after a day of revisiting that there were actually good reasons for the decision.
Having a clear record of the original decision and requirements makes these regular challenges more efficient. It then usually boils down to checking the documented list of requirements in the record and asking “Has the problem or the context changed in any way?” Once you get better at identifying these moments, remember not only to describe the solution you went for, but also the problem and the context. For example, if you write down that you chose technology X because it’s scalable and supports Docker, then you’re actually listing characteristics of your solution, but not really why you need them. Why do you need scalability, why do you need Docker support? That reasoning will help your future self decide if circumstances have changed in a way that allows for a less scalable solution, or for one without Docker. The Nygard-recommended structure of “Context – Decision – Consequences” helps to focus on a problem description. How to keep it up to date? One of the most frequently asked questions with regards to documentation is: How do I keep it up to date? The short answer is: You probably don’t - at least not 100%. In the end, anything that is not necessary to keep the software running will ultimately be out of date to some degree, so executable documentation forms like readable code, tests and scripts are important foundations. The techniques described here are all just complementary to that. The following are a few principles I use to try and keep the more high-level, non-executable documentation reasonably up to date. Create as little as possible Having as little documentation as possible is the only long-term protection against outdatedness. And the more details you add, the higher the probability that you will have to update them soon. Again, think about the value a piece of documentation or detail brings, and remember that you will have to maintain it, just like every line of code you’re writing. Also, don’t be afraid to throw things away!
Some of the examples described earlier are actually only useful temporarily; it’s okay to throw them out when they aren’t helpful anymore. Include documentation grooming in team rituals Find ways to include the grooming of your documentation in your team rituals. I find a weekly “developer huddle” useful for this. This is a meeting in which the team discusses technical topics and questions that have come up during the week. You should also make good use of team member rotations to check the current usefulness and understandability of your documentation. Make it visible If a piece of documentation is buried in some corner of your wiki that you rarely go to, then it will most certainly wither away. If it’s up on the wall on the other hand, for everybody to see, or in a “readme” file that every new team member comes across, outdatedness will be noticed and corrected more frequently. Create ownership through collaboration Make sure the creation of documentation is a collaborative activity, to create collective ownership. This will increase the probability that all team members have an eye on how up to date the docs are, and correct them when necessary. If only one person on the team is creating documentation, they won’t keep up in the long term. Agile + Documentation = <3 Ultimately, the code is the only truth describing the current state of your systems. High-level documentation serves as a map to find entry points and your way around. And you need to put special emphasis on documenting the things that code cannot tell you: history and context. Self-documenting code does not help you challenge past decisions, or understand why things were done a certain way. Therefore, at the minimum, consider writing decision records, and take good care of your commit messages.
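As a reminder of what such a decision record can look like, here is a minimal sketch in the “Context – Decision – Consequences” structure Nygard suggests. The scenario, title, and numbering are invented for illustration:

```markdown
# ADR 7: Use PostgreSQL for the reporting store

## Context
Reporting queries aggregate across millions of rows; the team already
operates PostgreSQL in production; the licensing budget is fixed.

## Decision
We will build the reporting store on PostgreSQL rather than adding a
dedicated analytics database.

## Consequences
Operational knowledge is reused, but very large aggregations may need
pre-computed summary tables later.
```

Note that the Context section records the problem and constraints, not just properties of the chosen solution, so a future reader can check whether those constraints still hold.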
I hope I could show with these examples that the process of creating and grooming documentation can be a great catalyst for some of the other agile principles, like “Simplicity”, “Business people and developers work together daily”, “Attention to technical excellence and good design”, or “The team regularly reflects on how to become more effective”. If you’re somebody who tends to create a lot of documentation, and you want to reduce some waste: a focus on the value of documentation will help you prioritise. If you’re somebody who creates very little documentation, reflect on whether you’re missing some of the values mentioned, and what types of documentation would improve your team’s effectiveness. Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
Why doesn't the OpenMP atomic directive support assignment? The atomic directive in openmp supports stuff like x += expr x *= expr where expr is an expression of scalar type that does not reference x. I get that, but I don't get why you can't do: #pragma omp atomic x = y; Is this somehow more taxing cpu instruction-wise? Seems to me that both the legal and illegal statement loads the value of x and some other scalar value, changes the register value of x and writes it back. If anyone could explain to me how these instructions are (I assume) fundamentally different I would be very grateful. This is now possible in OpenMP 3.1 --> atomic update Because the suggested atomic assignment does not protect against anything. Remember that an atomic instruction can be thought of as a critical section that could be (but does not have to be) efficiently implemented by the compiler by using magic hardware. Think about two threads reaching x = y with shared x and private y. After all the threads finish, the last thread to execute "wins" and x is equal to its y. Wrap the assignment in a critical section and nothing changes; the last thread still "wins". Now, if the threads do something else with x afterwards, the slowest thread may not have caught up, and even if it has, the compiler could legitimately end up using some cached value for x (i.e. the thread's local y). To avoid this, you would need a barrier (so the winning thread has won) and its implied flush (so the local cache has been invalidated):

x = y;
#pragma omp barrier
// do something with shared x...

but I cannot think of a good reason to do this. Why do all the work to find y on many threads if most of them will be (non-deterministically) thrown away? Atomic assignment is useful. If the type of the value of X and Y is "large" in any sense, doing an atomic copy ensures that the resulting value X doesn't contain an inconsistent picture of any value Y that might be copied.
You also need, of course, "atomic read" (you suggested a barrier) to make sure that fetching a component of X doesn't get you a part of one value, as the race situation above describes. What I would have said is that the overhead for protecting individual copies is in general pretty high, and this might not be that helpful from a performance point of view. -1: This answer is incorrect. The OpenMP 3.0 standard does not guarantee atomicity when reading from or writing to shared variables: “The minimum size at which memory accesses by multiple threads without synchronization, either to the same variable or to different variables that are part of the same variable (as array or structure elements), are atomic with respect to each other, is implementation defined.” OpenMP 3.1 introduced atomic read/write specifically to address this limitation.
Inside Windows, applications use "windows". You can only access these windows using APIs and lots and lots of documentation. I implemented all the useful and working Window APIs into a single class. You can do the following with it: - Return all active windows on the OS - Get the process of a window (handle) - Set/get text, location, size, enabled, visible, exstyles, font, window order - Perform commands: cut, copy, close, destroy, bringtofront, sendtoback, etc. - Capture the window into a bitmap - Set transparency - Get window parent and children using a built-in Enumerator Code is in the attachment. It consists of 2 files: - API.vb: contains direct SendMessage, DefWindowProc, SendInput and key APIs - Window.vb: the main class to use for windows. Contains APIs that are used internally The Window class only holds a pointer (hwnd); it is not storing any background information about children or states. When using the same property more than once at the same time, it is wise to store it in a new variable. Note: a window is not a form; a window is everything you see. This includes textboxes, buttons, pictureboxes, labels, everything.Code:Dim w As New Window("Form1") 'Good: Dim s As Size = w.Size w.Location = New Point(s.Width, s.Height) 'Bad: w.Location = New Point(w.Size.Width, w.Size.Height) It is pretty cool to enable a disabled button and move on in an app while you are actually not allowed to. 
Simple usage examples: Getting a window - 'Get the Foreground window - Dim w As Window = Window.GetForeGround - 'Find a window or child: - Dim w As Window = Window.Find("notepad") - Dim c As Window = w.FindChild("edit") - 'Window children - Dim children() As Window = w.Children() - Dim childbyname As Window = w.Children("name") - Dim childbypoint As Window = w.Children(Cursor.Position) - Dim childbyindex As Window = w.Children(2) - 'Get all windows on the current Desktop (OS): - Dim windows() As Window = Window.All - 'Get all windows on the current Desktop (OS) using the running Processes: - Dim windows() As Window = Window.AllFromProcessList() - 'Get all windows belonging to a certain process: - Dim p As Process = Process.GetProcessesByName("notepad").First - Dim windows() As Window = Window.All(p) - 'Get the Window of one of your own Controls or Forms: - Dim MyWindow As Window = Window.FromControl(Me) - 'Get the Next sibling Window above this Window: - Dim nextwindow As Window = w.NextSibling - 'Get the Desktop Window on which all other Windows are drawn: - Dim desktop As Window = Window.GetDesktop() - 'Get the currently focused window control: - Dim focusedwindow As Window = Window.GetFocused() - 'Get the currently focused window inside another Window: - 'Note that it uses the Thread ID, only one control is focused in each program. 
- Dim focusedwindow As Window = w.FocusedWindow - 'Change the text: - w.Text &= vbNewLine & "New line" - 'Kill the process owner: - Dim p As Process = w.Process - If Not IsNothing(p) Then p.Kill() - 'Toggle the enabled state: - w.Enabled = w.Enabled = False - 'Move it around and resize - w.Location = New Point(100, 100) - w.Size = New Size(300, 200) - w.Bounds = New Rectangle(100, 100, 300, 200) - 'Set the Window outside of the screen: - w.Location = New Point(-w.Size.Width - 50, -w.Size.Height - 50) - 'Change the WindowState: - w.WindowState = Window.State.Minimized - w.WindowState = Window.State.Maximized - w.WindowState = Window.State.Normal - 'Perform commands: - 'Set transparency(key): - w.Opacity = 128 '50% - w.TransparencyKey = Color.FromKnownColor(KnownColor.Control) - 'Change (Ex)Styles: - w.ExStyle(Window.ExStyles.TopMost) = True - w.Style(Window.Styles.MaximizeBox) = False - 'Capture a Window into a bitmap: - Dim imgw As Image = w.CaptureWindow() - Dim imgc As Image = w.CaptureClient() - Dim screenshot As Image = Window.GetDesktop().CaptureClient() - 'Device Context operations (using the HDC of a window): - Dim DC As API.DC = w.GetDC() - Dim c1 As Color = DC.GetPixel(10, 10) - Dim c2 As Color = DC.GetPixel(11, 11) - Dim screenshot As Bitmap = DC.ToBitmap() - DC.SetPixel(5, 5, Color.Black) - DC.Pixel(5, 5) = DC.Pixel(10, 10) - DC.Graphics.FillRectangle(Brushes.Red, 0, 0, 100, 100) - 'Drawing on other Windows using CreateGraphics: - Dim g As Graphics = w.CreateGraphics() - g.FillRectangle(Brushes.Red, 10, 10, 100, 100) - 'Sending keys (no focus required; combine key flags with Or, not And): - w.SendKey(Keys.Control Or Keys.A, Window.KeyCommand.KeyEvent.Down) - w.SendKey(Keys.Control Or Keys.A, Window.KeyCommand.KeyEvent.Up) - w.SendChars("All text is now this") <15th March 2011> Had to update Window.vb: the ExStyle enum was set to Integer while it should've been Long. 
Also, made sure the handle was passed as HandleRef to either the 32- or 64-bit version of GetWindowLongPtr. Note: This class has been in development and is still under development, so it *could* contain some bugs here and there. <19th March 2011> Got "TransparencyKey" and "Opacity" working with internal "AWL" style setting. Also added "Name" property. Updated the files; no additional functions are used. They are replaced by shared functions inside the Window class. (GetWindows => Window.All) Might get rid of the API class as well, just keeping it since it holds non-window related messages. Good news: I managed to send keys, characters and Control + commands to an external minimized window using PostMessage. Only two arguments will be needed: the key code and the key state (down/up). <23rd March 2011> Added "SendKey", "SendChar" and "SendString" functions to send input to other windows. Also added "Find" functions for children and all windows on the OS. API now contains an INPUT structure as well that uses SendInput. <28th March 2011> Added GetPixel, client capture and client bounds, plus the functions "Next/Previous/Top/BottomChild" for returning Window siblings in the Z-order. <3rd April 2011> Added input attaching, focus functions, a DC class for handling Device Context operations, both a Window and a Client DC attribute, and simply added a lot of classes/functions inside the API class.
Omnithread unobserved nested thread not released when expected Background My flagship app seems to leak memory. Well, not really, but just during runtime: investigating this showed the memory 'leak' is resolved when the app closes, but not in between. I am using a background thread that itself starts another thread. Both threads (parent and child) are 'Unobserved', which should result in the TOmniTaskControl object being released and freed when the task is finished. Experimenting code procedure TfrmMain.MyTestProc(const aTask: IOmniTask); begin sleep(100); SGOutputDebugStringFmt('%s.MyTestProc for %s',[ClassName,aTask.Name]); sleep(100); end; procedure TfrmMain.MyNestedTestProc(const aTask: IOmniTask); var lTask:IOmniTaskControl; begin sleep(100); SGOutputDebugStringFmt('%s.MyTestProc for %s',[ClassName,aTask.Name]); lTask:=CreateTask(MyTestProc,'NestedTask'); lTask.Unobserved.Run; sleep(100); end; procedure TfrmMain.btSimpleThreadClick(Sender: TObject); var lTask:IOmniTaskControl; begin lTask:=CreateTask(MyTestProc,'SimpleThread'); lTask.Unobserved.Run; end; procedure TfrmMain.btNestedThreadClick(Sender: TObject); var lTask:IOmniTaskControl; begin lTask:=CreateTask(MyNestedTestProc,'NestedThread'); lTask.Unobserved.Run; end; When debugging, with a breakpoint on TOmniTaskControl.Destroy and a watch on TOmniTaskControl.Name, I see the following: btSimpleThreadClick: TOmniTaskControl for 'SimpleThread' gets created TOmniTaskControl for 'SimpleThread' gets destroyed btNestedThreadClick: TOmniTaskControl for 'NestedThread' is created TOmniTaskControl for 'NestedTask' is created TOmniTaskControl for 'NestedThread' is destroyed Problem: TOmniTaskControl for 'NestedTask' is NOT destroyed. Another issue is that OnTerminate isn't called either. Then, when closing the app, the TOmniTaskControl for 'NestedTask' is destroyed. (And also, the OnTerminate is fired.) Workaround I came up with this solution which seems to do the trick. 
The thing is, however, that I usually do NOT run my subthreads from a form where a TOmniEventMonitor is at hand. So I'd have to create a global TOmniEventMonitor object for this. But isn't this the whole point of the Unobserved method? procedure TfrmMain.MyNestedTestProc(const aTask: IOmniTask); var lTask:IOmniTaskControl; begin sleep(100); SGOutputDebugStringFmt('%s.MyTestProc for %s',[ClassName,aTask.Name]); lTask:=CreateTask(MyTestProc,'NestedTask'); lTask.MonitorWith(OmniEventMonitor).Run; // OmniEventMonitor is a component on my form sleep(100); end; Well, a secondary workaround... kind of. It does not allow my thread to be freed unattended. If I change the NestedTestProc to the code below, then the NestedTask gets destroyed at the expected moment. Unfortunately, this solution is clearly not 'Unobserved'. procedure TfrmMain.MyNestedTestProc(const aTask: IOmniTask); var lTask:IOmniTaskControl; begin sleep(100); SGOutputDebugStringFmt('%s.MyTestProc for %s',[ClassName,aTask.Name]); lTask:=CreateTask(MyTestProc,'NestedTask'); try lTask.Run; lTask.WaitFor(2000); finally lTask:=nil; end; sleep(100); end; Update 20201029 >> An invalid handle (1400) error typically occurs when the master task was completed and its associated monitor was already destroyed. So - the 'master' task thread should not die in case there is an "owned" monitor that is monitoring other threads. So to check this, I changed the timing (using sleep()) to ensure the child task was completed before the master task is completed. The invalid handle error is gone now, and the COmniTaskMsg_Terminated message gets to be posted successfully. But still the COmniTaskMsg_Terminated from the child task is not processed. (I expected the thread of the master task to handle this.) 
IMO there are 2 problems: lifetime management of the Unobserved monitor, and shutdown management of the Unobserved monitor, which should keep the "owning" thread alive and keep processing messages until all monitored threads/tasks are gone. Also I wonder whether these shutdown messages should be handled/processed by the application main thread (it seems this is the case now) or otherwise through a separate thread that checks all the monitors in GTaskControlEventMonitorPool. AIA, pretty complicated stuff :s... Giving this some thought, monitors that were created by the application main thread (thus Monitor.ThreadID = MainThreadID) should handle their messages in the main thread message loop, and all others probably need to be handled by a separate thread... It's just too confusing! I will see if I can write a unit test for this, just to demonstrate what I expect to happen. << Update 20201029 The question With OmniThreadLibrary, how can I use unobserved threads inside threads, and avoid the described memory leak? The help mentions "This method makes the task being observed by an internal monitor." Looks like that internal monitor is holding onto the task and keeping it alive longer than you want. I thought at first about a non-released interface caused by RSP-29564, but adjusting my code for this didn't help. Using the TOmniEventMonitor component on my main form made the problem go away. I also noticed the release call comes from a message handler in DsiWin32. Except for when it is a nested thread using the Unobserved method. @Brian - exactly.
Abstract exception super type If throwing System.Exception is considered so bad, why wasn't Exception made abstract in the first place? That way, it would not be possible to call: throw new Exception("Error occurred."); This would enforce using derived exceptions to provide more details about the error that occurred. For example, when I want to provide a custom exception hierarchy for a library, I usually declare an abstract base class for my exceptions: public abstract class CustomExceptionBase : Exception { /* some stuff here */ } And then some derived exception with a more specific purpose: public class DerivedCustomException : CustomExceptionBase { /* some more specific stuff here */ } Then when calling any library method, one could have this generic try/catch block to directly catch any error coming from the library: try { /* library calls here */ } catch (CustomExceptionBase ex) { /* exception handling */ } Is this a good practice? Would it be good if Exception was made abstract? EDIT: My point here is that even if an exception class is marked abstract, you can still catch it in a catch-all block. Making it abstract is only a way to forbid programmers from throwing a "super-wide" exception. Usually, when you voluntarily throw an exception, you should know what type it is and why it happened. This would enforce throwing a more specific exception type. +1 "The best way to prevent incorrect use is to make such use impossible." - Scott Meyers "Would it be good if Exception was made abstract?" - Absolutely not, because all of the existing .NET code which uses new Exception("...") would break. However, "Should the .NET team have made Exception abstract to begin with?" - probably, yes, but it's >10 years too late now :) I don't know the actual reasons why it was done this way, and to a certain degree I agree that preventing infinitely wide exceptions from being thrown would be a good thing. BUT... 
when coding small demo apps or proof-of-concepts, I don't want to start designing 10 different exception sub-classes, or spending time trying to decide which is the "best" exception class for the situation. I'd rather just throw Exception and pass a string that explains the details. When it's throw-away code, I don't care about these things, and if I was forced to care about such things, I'd either create my own GenericException class and throw that everywhere, or move to a different tool/language. For some projects, I agree that properly creating relevant Exception subclasses is important, but not all projects require that. I completely agree with you, but the question was implicitly thinking about non-trivial projects. It's also the kind of issue that humans easily work around. Once Exception is no longer available as a quick option (such as in demos), then people will start using some other exception type as their go-to "some exception" type. Possibility A: It's logically correct. You are correct that Microsoft, along with many others, does not suggest throwing a new Exception() directly, for a plethora of reasons. That being said, we can consider the academic ideal that the purpose of the Exception class hierarchy is to define a narrowing effect so that only the most specific exceptions can be caught (i.e. ArgumentNullException is narrower than ArgumentException). The Exception class is no exception (pun not intended). It is intended to be the broadest possible exception, a "super exception" that almost cannot be caught because its scope is infinitely wide. But Exception is not abstract in the sense that it cannot exist as an entity on its own. It can (though admittedly no good cases are defined yet - see Possibility B), and therefore should be publicly constructible. The 'abstract' keyword (in a purely academic sense) is only applicable when the base class makes no sense on its own - i.e. FourLeggedAnimal. 
All of this in mind, there would be no technical reason to make the class abstract, other than to be a source of aggravation to developers. Possibility B: Design Lock-in / They didn't know If MS had made this class abstract, they might have been in trouble if they changed their minds down the road, as this class is very essential to the fundamentals of the language. They already goofed with ApplicationException, so it's foreseeable that they anticipated a change in recommendation down the road, too. (see Link) There may be other reasons (I am thinking maybe this has to do with Reflection, or some other technical reason), so I am making this post a CW. "Just because it is "super-wide", it is not abstract." ... But that's the OP's argument, isn't it? Shouldn't it be abstract in the sense that some indication of the type of exception always needs to be passed? Good point, I will update my answer accordingly - the nature of this question is a little confusing because 'abstract' is both a language keyword as well as the name of the concept being discussed. @KevinMcCormick: The term 'abstract' here is used as the language keyword. Please tell me how I can change my question so it is clearer for future readers? I think it's a great question. Any OO conversation regarding the abstract keyword hits this wall. Not sure of any way to get around it other than denoting the abstract keyword with backticks. How is "disabstracting" a class a breaking change? @configurator, Good point, I am actually not sure if it would technically be a breaking change, though this would certainly be a pain if, say, they 'disabstracted' it in 2.0 but not in 1.1 (even worse in the other direction). Regardless, my thought is that it would be a breaking change in a recommendation sense. If it were abstract, MS wouldn't be able to later on say, "never mind, go ahead and create instances of Exception - oh wait, you can't because we made that assumption during the design of the FX". 
The Design Guidelines, FxCop, etc. all came out after .NET 1 was released. Previously I searched high and low for this definition of "the 'abstract' keyword (in a purely academic sense)", and I found no such definition. Fair enough, I can't find any formal definition either. I guess it was based purely on my idea that an abstract class defines an incomplete object, where inheritance is necessary in order for it to be functional. Exception does not require any added functionality in order to be functional - it is only a recommendation (albeit a strong one) that a subclass be used.
Machine Learning Program Vol. 3 Award-winning, project-oriented; lead engineer from Microsoft as instructor. By completing this program, you will become an industry job-ready candidate; what you will learn during this program is 100% what you will be doing in your first job. Program begins in: Machine Learning - Future Artificial Intelligence Artificial intelligence will shape our future more powerfully than any other innovation of this century. Anyone who doesn't understand it will soon feel left behind, waking up in a world full of technology that feels more and more like magic. In 2015, Google trained a conversational agent (AI) that not only convincingly interacts with humans as a technical support desk, but also discusses ethics, expresses opinions and answers general questions based on facts. Machine learning + artificial intelligence Machine learning is one of many sub-areas of artificial intelligence, involving the way computers learn from experience to improve their ability to think, plan, decide, and act. Its goal is to enable computers to learn on their own. A machine's learning algorithm enables it to identify patterns in the observed data, build models that explain the world, and predict things without explicit pre-programmed rules and models. Machine learning workflow The machine learning workflow is the process required to execute a machine learning project. While individual projects may vary, most workflows share several common tasks: problem assessment, data exploration, data preprocessing, model training/testing/deployment, and more. You will find useful visualizations of these core steps below: Machine learning application As already mentioned, machine learning is a technology that helps develop and improve many of the features we are used to. Let's consider the ones we most often encounter when we use our favorite applications on a daily basis, and finally understand the advantages of machine learning technology. 
They use algorithms for acoustic and language modeling. Acoustic modeling represents the connection between linguistic units of speech and audio signals, and language modeling matches sounds to word sequences to distinguish words that sound similar. This allows users to talk to their computers and have their speech converted into text through word processing and speech recognition. You can access feature commands such as setting an alarm, opening a file, booking at your favorite restaurant, and more. On the other hand, some mobile applications are used in specialised business settings, such as medical or legal transcription. This feature is very common in mobile apps. For example, it is sometimes used for identification purposes or for photos that include filters and edits. In addition, using different types of machine learning algorithms, an application can estimate the user's gender and age, or perform retina or fingerprint recognition. A good example of a machine learning application is the identification of license plate violations on the road. Intelligent data analysis The more user data you collect and process, with big data and machine learning working in tandem, the more you know about the features users often use, or don't use at all. This way, you can expand your understanding of your audience and adapt your app to their liking to make your service even better. This combination has been applied by Amazon and Google in some of their services. Python or R The R language is very popular among data scientists, but if you need regular programming, it has many drawbacks. On the other hand, Python is a general-purpose programming language that can be applied to many use cases. This may be the main reason why Python has become the language of choice in the machine and deep learning field in recent years. Every decent library provides a Python API or treats Python as its only target language. Python is a very simple and beginner-friendly language. 
Moreover, there is no need to know all the complexity of the language to apply it to ML. In traditional programming, most of the time is spent in a text editor or IDE; in data science, most of the code is written in Jupyter Notebook. It is a simple and powerful tool for exploring data analysis problems. It allows you to write Python code and text descriptions, and to embed charts and graphs directly into interactive web pages. To make things easier, Google created a free service, Google Colab, which provides CPU resources and access to GPUs, which is handy when dealing with neural networks and deep learning. Scikit-learn is one of the most popular ML libraries today. It supports most classic supervised and unsupervised learning algorithms: linear and logistic regression, SVM, naive Bayes, gradient boosting, clustering, k-means, and more. In addition to the different ML models, Scikit-learn provides a variety of data preprocessing and results analysis methods. Scikit-learn focuses on classical ML algorithms, so its support for neural networks is very limited and it cannot be used for deep learning problems. PyTorch is a popular deep learning library built by Facebook. In addition to the CPU, it also supports GPU-accelerated computing. The library is dedicated to providing users with a fast and flexible modeling experience and has gained great appeal in the deep learning community. Matplotlib is the standard tool in every data scientist's toolbox. It provides the ability to draw many different types of graphs and charts to display results. Matplotlib diagrams can be easily embedded in Jupyter Notebook. This way, you can always visualize the data and the results you get from the model. Program Framework Summary Despite the large ecosystem, the most powerful tools in the world of machine learning and artificial intelligence are also the most widely used ones. 
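The classic Scikit-learn workflow mentioned above (load data, split, train a classic supervised model, evaluate) fits in a few lines. This is a minimal illustrative sketch, assuming scikit-learn is installed; it uses the bundled iris dataset rather than any course material:

```python
# Minimal Scikit-learn workflow: load -> split -> fit -> score
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # a classic supervised algorithm
model.fit(X_train, y_train)                # train
accuracy = model.score(X_test, y_test)     # test
print(f"test accuracy: {accuracy:.2f}")
```

The same fit/predict/score interface applies across Scikit-learn's estimators, which is a large part of why it is recommended as a starting point.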
When you're just getting into a machine learning journey, it's a good idea to choose from these tools, as they are widely covered in a variety of tutorials and introductory courses and have good community support. To simplify the learning curve, you can start with the classic ML algorithms and pay most of your attention to Scikit-learn. After learning more about it, continue with TensorFlow, PyTorch or Keras and enter the world of deep learning. - Program Date and Time - Program Duration - 2.5 hours for each checkpoint, 30 hours in total - Experience in any programming language (e.g. Java, C, Python) - Program instruction - Live broadcast; students must be on time. (There is no refund for missing any lesson.) Who should take this course For students who have no professional IT background, or are just getting into IT study Want to get your first job in an IT company, but don't know how to prepare for interviews Don't know where to start, even though there are thousands of courses online... Don't know how to communicate with interviewers, or struggling in technical interviews Want to get the most updated interview questions and mocks |Week||Content||Melbourne Time||Sydney Time| |1||Python - A Modern Multi-purpose Language||2019/11/11 17:00:00||2019/11/11 20:00:00| |2||Python - Jupyter Notebook||2019/11/16 17:00:00||2019/11/16 20:00:00| |3||Supervised learning, regression, housing, bitcoin price forecasting||2019/11/23 17:00:00||2019/11/23 20:00:00| |4||Perceptron, neural network, bagging, handwriting recognition||2019/11/30 17:00:00||2019/11/30 20:00:00| |5||Unsupervised learning, image compression, shared bicycle prediction||2019/12/07 17:00:00||2019/12/07 20:00:00| |6||Reinforcement learning, gradient strategy, maze challenge||2019/12/14 17:00:00||2019/12/14 20:00:00| |7||Deep learning, multi-layer neural networks, predicting comments and feelings||2019/12/21 17:00:00||2019/12/21 20:00:00| |9||Machine learning project summary and presentation, preparing for the first job||2019/12/28 17:00:00||2019/12/28 20:00:00| Ex-Microsoft Engineer/Team Lead PhD in CS, 20+ offers from Microsoft, ANZ, NAB..., Lecturer/Head tutor at RMIT and Melbourne Uni. 10+ years of experience as both interviewee and interviewer IT Consultant / Senior Web Dev Expert in full-stack web system design and implementation Leads a team using .NET Core + React for commercial web applications Australia's Top Senior/Lead Engineers Provide systematic and personalised IT training with a job-oriented program There are no shortcuts, no free lunch, only a hard-working-to-thrive IT training program. Top Industry Project/Lecturer Driven Melbourne Uni, Monash, Deakin and RMIT's best lecturers are here to help you. Massive practice after each class No more struggling; everything is based on real-world industry projects. What you have been trained on is what you will do in your next job. Question & Answer Ensure that each student's questions are answered professionally. No more struggling about which one is the correct answer. Find Partners Looking For Jobs Together Exclusive VIP group of enrolled students, internal referrals, job-seeking advice. Python - A Modern Multi-purpose Language Python - Jupyter Notebook Supervised learning, regression, housing, bitcoin price forecasting Perceptron, neural network, bagging, handwriting recognition Unsupervised learning, image compression, shared bicycle prediction Reinforcement learning, gradient strategy, maze challenge Machine learning project summary and presentation, preparing for the first job - 9 weeks of hands-on job experience, updated interview information - Q&A by top senior/lead engineers, no more struggling for the correct answer - Exclusive VIP group of enrolled students, job-seeking advice
# Fit an ex-Gaussian model to scaled reflection intensities.
import math
import numpy as np
import matplotlib.pyplot as plt
from libtbx import easy_pickle
# from test3_logarithm import *   # provides skewness(), sampler(), exgauss_cdf()

def exgauss_pdf(data, mu, sigma, tau):
    # Exponentially modified Gaussian PDF, evaluated pointwise
    y = []
    for x in data:
        y.append(np.exp((sigma * sigma - 2.0 * tau * (x - mu)) / (2.0 * tau * tau))
                 * (1e-15 + 1.0 - math.erf((sigma * sigma - tau * (x - mu))
                                           / (sigma * tau * math.sqrt(2)))))
    return np.array(y)

data = easy_pickle.load('merged_images.pickle')
reflections = len(data)
Scaled_Intensity = []
# play around with index 0
# for datapt in data[list(data.keys())[0]]:
#     Scaled_Intensity.append(datapt[0])

# Print out all the reflections in separate files, then stop
for reflection in data:
    fout = open('reflections/intensities_%s_%s_%s.dat'
                % (reflection[0], reflection[1], reflection[2]), 'w')
    fout.write('# Miller Index %s %s %s\n'
               % (reflection[0], reflection[1], reflection[2]))
    for datapt in data[reflection]:
        fout.write('%12.8f\n' % datapt[0])
    fout.close()
exit()

# --- everything below is unreachable while the exit() above is in place ---
print(len(Scaled_Intensity))
total_instances = len(Scaled_Intensity)

# Print out this scaled intensity and exit for now
fout = open('reflection.dat', 'w')
for entry in Scaled_Intensity:
    fout.write('%12.3f\n' % entry)
exit()

# n, bins, patches = plt.hist(Scaled_Intensity, 10, facecolor='green', alpha=0.75)
# plt.xlabel('Intensity')
# plt.ylabel('Frequency')
# plt.title('Intensity distribution for miller indices %s' % str(list(data.keys())[0]))

# Method-of-moments initial guesses for the ex-Gaussian parameters
Scaled_Intensity = np.array(Scaled_Intensity)
mu = np.mean(Scaled_Intensity) - skewness(Scaled_Intensity)
tau = np.std(Scaled_Intensity) * 0.8
sigma = np.sqrt(np.var(Scaled_Intensity) - tau * tau)
nsteps = 50000
maxI = np.max(Scaled_Intensity)
minI = np.min(Scaled_Intensity)
print('initial guesses', mu, sigma, tau)
params = sampler(Scaled_Intensity, samples=nsteps, mu_init=mu, sigma_init=sigma,
                 tau_init=tau, proposal_width=0.001 * np.abs(maxI - minI), plot=False)
mu, sigma, tau = params[-1]
print('final parameter values ', mu, sigma, tau)

X1 = np.arange(min(Scaled_Intensity), max(Scaled_Intensity), 100.0)
for count in range(40000, nsteps, 10000):
    x = np.arange(minI, maxI, 100.0)
    mu, sigma, tau = params[count]
    # y = total_instances * exgauss_pdf(x, mu, sigma, tau)
    # l = plt.plot(x, y, 'grey', linewidth=0.3)
    # Get CDFs
    F1 = exgauss_cdf(X1, mu, sigma, tau)
    plt.plot(X1, F1, 'grey', linewidth=0.3)

# Empirical CDF of the observed intensities
X2 = np.sort(Scaled_Intensity)
F2 = np.arange(len(Scaled_Intensity)) / float(len(Scaled_Intensity))
plt.plot(X2, F2, 'o')
plt.show()
Haskell: Parallel code is slower than sequential version I am pretty new to Haskell threads (and parallel programming in general) and I am not sure why my parallel version of an algorithm runs slower than the corresponding sequential version. The algorithm tries to find all k-combinations without using recursion. For this, I am using this helper function, which given a number with k bits set, returns the next number with the same number of bits set: import Data.Bits nextKBitNumber :: Integer -> Integer nextKBitNumber n | n == 0 = 0 | otherwise = ripple .|. ones where smallest = n .&. (-n) ripple = n + smallest newSmallest = ripple .&. (-ripple) ones = ((newSmallest `div` smallest) `shiftR` 1) - 1 It is now easy to obtain sequentially all k-combinations in the range [2^k - 1, 2^(n-k) + ... + 2^(n-1)]: import System.Environment import qualified Data.Stream as ST combs :: Int -> Int -> [Integer] combs n k = ST.takeWhile (<= end) $ kBitNumbers start where start = 2^k - 1 end = sum $ fmap (2^) [n-k..n-1] kBitNumbers :: Integer -> ST.Stream Integer kBitNumbers = ST.iterate nextKBitNumber main :: IO () main = do params <- getArgs let n = read $ params !! 0 k = read $ params !! 1 print $ length (combs n k) My idea is that this should be easily parallelizable by splitting this range into smaller parts. 
For example:

start :: Int -> Integer
start k = 2 ^ k - 1

end :: Int -> Int -> Integer
end n k = sum $ fmap (2 ^) [n-k..n-1]

splits :: Int -> Int -> Int -> [(Integer, Integer, Int)]
splits n k numSplits = fixedRanges ranges []
  where s = start k
        e = end n k
        step = (e-s) `div` (min (e-s) (toInteger numSplits))
        initSplits = [s,s+step..e]
        ranges = zip initSplits (tail initSplits)
        fixedRanges [] acc = acc
        fixedRanges [x] acc = acc ++ [(fst x, e, k)]
        fixedRanges (x:xs) acc = fixedRanges xs (acc ++ [(fst x, snd x, k)])

At this point, I would like to run each split in parallel, something like:

runSplit :: (Integer, Integer, Int) -> [Integer]
runSplit (start, end, k) = ST.takeWhile (<= end) $ kBitNumbers (fixStart start)
  where fixStart s
          | popCount s == k = s
          | otherwise = fixStart $ s + 1

For parallelization I am using the monad-par package:

import Control.Monad.Par
import System.Environment
import qualified Data.Set as S

main :: IO ()
main = do
  params <- getArgs
  let n = read $ params !! 0
      k = read $ params !! 1
      numTasks = read $ params !! 2
      batches = runPar $ parMap runSplit (splits n k numTasks)
      reducedNumbers = foldl S.union S.empty $ fmap S.fromList batches
  print $ S.size reducedNumbers

The result is that the sequential version is way faster and uses little memory, while the parallel version consumes a lot of memory and is noticeably slower.

What might be the reasons causing this? Are threads a good approach for this problem? For example, every thread generates a (potentially large) list of integers and the main thread reduces the results; are threads expected to need much memory, or are they simply meant to produce simple results (i.e. only cpu-intensive computations)?

I compile my program with

stack build --ghc-options -threaded --ghc-options -rtsopts --executable-profiling --library-profiling

and run it with

./.stack-work/install/x86_64-osx/lts-6.1/7.10.3/bin/combinatorics 20 3 4 +RTS -pa -N4 -RTS

for n=20, k=3 and numSplits=4.
An example of the profiling report for the parallel version can be found here and for the sequential version here.

How are you running and testing the program? Did you forget a parameter k on the combs function?

@pdexter I have just updated my question with build/run commands and profiling reports for both parallel and sequential versions.

What's your main for the sequential version? Thanks.

@d8d0d65b3f7cf42 I tried to use threadscope with stack without much success (the -eventlog flag seems not supported).

"not supported" - oh. Then this should be reported to the stack developers.

For further reference, the -eventlog flag is indeed supported; I just needed to drop the --executable-profiling/--library-profiling options. Now I can use ThreadScope, yay!

In your sequential version, calling combs does not build up a list in memory, since after length consumes an element it isn't needed anymore and is freed. Indeed, GHC may not even allocate storage for it. For instance, this will take a while but won't consume a lot of memory:

main = print $ length [1..1000000000] -- 1 billion

In your parallel version you are generating sub-lists, concatenating them together, building Sets, etc., and therefore the results of each sub-task have to be kept in memory.

A fairer comparison would be to have each parallel task compute the length of the k-bit numbers in its assigned range, and then add up the results. That way the k-bit numbers found by each parallel task wouldn't have to be kept in memory and it would operate more like the sequential version.

Update

Here is an example of how to use parMap. Note: under 7.10.2 I've had mixed success getting the parallelism to fire - sometimes it does and sometimes it doesn't. (Figured it out - I was using -RTS -N2 instead of +RTS -N2.)
{- compile with:
     ghc -O2 -threaded -rtsopts foo.hs
   compare:
     time ./foo 26 +RTS -N1
     time ./foo 26 +RTS -N2
-}

import Data.Bits
import Control.Parallel.Strategies
import System.Environment

nextKBitNumber :: Integer -> Integer
nextKBitNumber n
  | n == 0 = 0
  | otherwise = ripple .|. ones
  where smallest = n .&. (-n)
        ripple = n + smallest
        newSmallest = ripple .&. (-ripple)
        ones = (newSmallest `div` smallest) `shiftR` 1 - 1

combs :: Int -> Int -> [Integer]
combs n k = takeWhile (<= end) $ iterate nextKBitNumber start
  where start = 2^k - 1
        end = shift start (n-k)

main :: IO ()
main = do
  (arg1 : _) <- getArgs
  let n = read arg1
  print $ parMap rseq (length . combs n) [1..n]

I understand your point and it seems indeed logical, thank you. If you had to explicitly obtain all numbers (not just the total number of k-combinations), how would you approach this problem?

What do you mean by "obtain all numbers"?

good approaches for this problem

What do you mean by this problem? If it's how to write, analyze and tune a parallel Haskell program, then this is required background reading: Simon Marlow: Parallel and Concurrent Programming in Haskell http://community.haskell.org/~simonmar/pcph/ in particular, Section 15 (Debugging, Tuning, ..). Use threadscope! (a graphical viewer for thread profile information generated by the Glasgow Haskell compiler) https://hackage.haskell.org/package/threadscope

Thanks for your recommendation! Actually, I ordered it some days ago and have just received it :) Some homework for the weekend...
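For reference, the core bit trick in nextKBitNumber above is Gosper's hack. A minimal standalone C++ sketch of the same computation (the function name mirrors the Haskell one; everything else here is my own framing, not from the thread):

```cpp
#include <cstdint>

// Gosper's hack: given n with k bits set, return the next larger
// integer that also has exactly k bits set.
std::uint64_t nextKBitNumber(std::uint64_t n) {
    if (n == 0) return 0;
    std::uint64_t smallest    = n & (~n + 1);   // lowest set bit of n
    std::uint64_t ripple      = n + smallest;   // carry flips the low run of ones
    std::uint64_t newSmallest = ripple & (~ripple + 1);
    std::uint64_t ones = ((newSmallest / smallest) >> 1) - 1;  // refill the shifted-out ones
    return ripple | ones;
}
```

Starting from 0b0111 = 7, repeated calls yield 11, 13, 14, 19, ..., exactly the sequence the Haskell combs enumerates for k = 3.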
import { cleanWhiteSpaces, expandContractions, killUnicode } from '../src/main';

test('Kill Unicode', () => {
  expect(killUnicode(`it is “Khorzu”`)).toBe('it is "Khorzu"');
  expect(killUnicode(`see you …`)).toBe('see you ...');
  expect(killUnicode(`don’t`)).toBe(`don't`);
});

test('Clean White Spaces', () => {
  expect(cleanWhiteSpaces(` hello World ! \t \n`)).toBe(
    'hello World !',
  );
});

test('Expand Contractions', () => {
  expect(expandContractions(`Couldn't've`)).toBe('could not have');
});
Use Recursive CTE in DB2 stored proc

I have a need to run a recursive CTE within a stored proc, but I can't get it past this:

SQL0104N An unexpected token "with" was found following "SET count=count+1; ". Expected tokens may include: "". LINE NUMBER=26.

My google-fu showed a couple of similar topics, but none with resolution. The query functions as expected outside of the stored proc, so I'm hoping that there's some syntactic sugar I'm missing that'll let this work. Similarly, the proc compiles and works without the query. Here's a contrived example:

-- setup
create table tree (id integer, name varchar(50), parent_id integer);
insert into tree values (1, 'Alice', null);
insert into tree values (2, 'Bob', 1);
insert into tree values (3, 'Charlie', 2);

-- the proc
create or replace procedure testme()
RESULT SETS 1
LANGUAGE SQL
BEGIN
  DECLARE SQLSTATE CHAR(5);
  DECLARE SQLCODE integer default 0;
  DECLARE count INTEGER;
  DECLARE sum INTEGER;
  DECLARE total INTEGER;
  DECLARE id INTEGER;
  DECLARE curs CURSOR WITH RETURN FOR select count,sum from sysibm.sysdummy1;
  DECLARE hiercurs CURSOR FOR select id from tree order by id;

  SET bomQuery='';
  PREPARE stmt FROM bomQuery;
  SET count = 0;
  SET sum = 0;
  set total = 0;
  OPEN hiercurs;
  FETCH hiercurs INTO id;
  WHILE (SQLCODE <> 100) DO
    SET count=count+1;
    with org (level,id,name,parent_id) as
      (select 1 as level,root.id,root.name,root.parent_id
       from tree root where root.id=id
       union all
       select level+1,employee.id,employee.name,employee.parent_id
       from org boss, tree employee
       where level < 5 and employee.parent_id=boss.id)
    select count(1) into sum from org;
    SET total=total+sum;
    FETCH hiercurs INTO id;
  END WHILE;
  CLOSE hiercurs;
  OPEN curs;
END

The CTE in db2 doesn't seem to recognize the scalar result of the query, and so it won't let the select into work (not a problem on Oracle or SQLServer)... the solution is to open a cursor and FETCH INTO (instead of SELECT INTO).
In addition to rjb's suggestion of enclosing the CTE query inside a cursor, you can also stuff the CTE into a user-defined function or a view, and then code a straight select against that object in your stored procedure.
Preparing the Source Image

Studio580 allows you to paint in the style of a source image that you provide. Before this is possible, however, you need to furnish Studio580 with a source image in a format that it can use. Later, you’ll incorporate this source image into a brush object that is usable within the Studio580 application.

The basic steps of preparing a source image for incorporation into a new brush object are illustrated in the figure below and discussed in the paragraphs that follow.

Valid Studio580 brush source images are 24-bit true-color images stored in Truevision TARGA (*.tga) format that are either 32 × 32 pixels, 64 × 64 pixels, or 128 × 128 pixels in size. Usually, these source images are extracted from digitized versions of traditional artwork, as illustrated above. The process of extracting an artwork sample for use with Studio580 is straightforward and requires an image editing application capable of working with TARGA images; either Photoshop or The Gimp works fine. With your image editor at hand, proceed as follows:

Extract—choose a square swatch of image that captures the essence of the artistic style or medium you wish to recreate. At this point, it doesn’t matter exactly what size the swatch is or how it’s rotated. For example, in the figure above, the source swatch (shown stroked in red) is square. However, it is neither exactly 32 × 32 pixels, nor 64 × 64 pixels, nor 128 × 128 pixels in size, nor is it rotated in an axis-aligned way. Indeed, the source patch, while square, is rotated about 38 degrees off-axis, and so appears as a diamond.

Align—rotate the image as shown at right in the figure above. If the image contains “strokey” or “sketchy” elements that suggest a direction of travel, rotate the image such that the direction of travel lines up left-to-right.

Fit—crop and/or scale the image until it is either 32 × 32 pixels, 64 × 64 pixels, or 128 × 128 pixels in size.
When you’ve finished these steps and are satisfied with the result, save the image that you’ve created to a TARGA file. At this point, it doesn’t matter where you save it—you can place it anywhere in your filesystem. You now have a valid brush source image in hand. But one more step is necessary before you can incorporate your image into a brush for use within Studio580.

Creating the Similarity Map

Studio580 brushes require two pieces of data: a source image, as discussed in the previous section, and a similarity map. The similarity map is derived from the source image, and is required by Studio580’s texture synthesis engine. Similarity maps are created via Brush Cartographer, an ancillary application located in the Tools folder (for more information about where things are on disk, see the Quick Tour). Brush Cartographer is straightforward to use. Its user interface is illustrated in the figure below, and discussed in the paragraphs that follow.

Brush Source Image—clicking the “Set...” button in this section opens a file chooser, allowing you to select the source image for which you wish to generate a similarity map. Choose the TARGA image file you created in the previous section. Other controls in Brush Cartographer appear grayed-out until you set a source image.

Map Construction Parameters—the key control in this section is the “Similarity entries per pixel” slider. For every pixel in the source image, Brush Cartographer makes a list of other pixels whose neighborhoods are perceptually similar; this slider determines the number of entries in that list. More entries produce smoother, more life-like brush surface textures, but can decrease interactive performance. The default value of 4 often results in a good compromise.

Ignore the “Coarsest pyramid level considered” slider in this section. It is used for debugging the synthesis algorithm—remember: Studio580 is a research demo, not production software.
Variety Boosting—enabling variety boosting requires similarity entries to be separated by some fraction t of the image dimension. For example, if t = 1/20, and the source image is 128 × 128 pixels in size, then similarity map entries must be separated by at least 6 pixels, since 6 = floor(128t). In general, similarity maps generated with variety boosting produce better synthesis results than maps made without variety boosting. However, control of the separation fraction is important: setting it too high can result in “noisy” synthesis results that lack spatial coherency; setting it too low can mitigate the effects of enabling variety boosting in the first place. The default value of 1/20 often results in a good compromise.

Map Output File—allows you to change the name of the similarity map file that Brush Cartographer outputs. Usually, renaming the output file here is unnecessary, because Brush Cartographer generates a descriptive name automatically. Brush Cartographer always writes the output file into the same directory that contains the source image.

Generate—once you’ve set your desired options, click the “Generate” button to create the similarity map and write it to disk. During map generation, Brush Cartographer hides most of its user interface and displays only a progress bar indicator. When the progress bar disappears and the usual Brush Cartographer user interface returns, your similarity map file is ready for use.

Installing the Brush

At this point, you should have two files on disk: a TARGA (*.tga) image file encoding the brush source image, and a Brush Cartographer generated map (*.simset) file encoding the similarity map. Follow the instructions in the Readymade Brush Installation Guide to incorporate these files into a brush object that can be used within Studio580.
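As an aside, the separation arithmetic in the Variety Boosting section above is simple enough to check directly. A small C++ sketch (the function and parameter names are mine, not Brush Cartographer's) reproduces the worked example:

```cpp
#include <cmath>

// Variety boosting: similarity-map entries must be separated by at least
// floor(imageDim * t) pixels, where t is the separation fraction.
int minSeparation(int imageDim, double t) {
    return static_cast<int>(std::floor(imageDim * t));
}
```

With the defaults from the text, minSeparation(128, 1.0/20) gives 6 pixels, matching the figure quoted above; the smaller 64 × 64 and 32 × 32 source sizes would give 3 and 1 pixels respectively.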
Wenzel P. P. Peppmeyer has written not one, not two, not three, but four blogs in the past two weeks, each addressing some feature or quirk of the Raku Programming Language.
- The Mysterious Infix (/r/rakulang comments)
- Returning the right amount
- Watching new arrivals
- Ungolden silence (/r/rakulang comments)

Vadim Belman tells us about the project they’ve been working on for the past months: Vikna, a console User Interface framework, which is also the subject of a presentation at the Conference in the Cloud (/r/rakulang comments).

Brian Wisti explains the developments surrounding the name change of RakudoBrew to Rakubrew, and how they initially avoided RakudoBrew for reasons now lost in time.

And yes, Jeff Goff is still missed a lot.

Have an idea for a cool project using Raku, but do not have the funds to do it? Then perhaps it is time to submit a grant proposal!

RakuDist at raku.org

The RakuDist project by Alexey Melezhik is now running on the Raku Community infrastructure! If you are a Raku module author, it is now very simple to have your module tested with just about any Raku compiler version. A great tool!

Writing blogs with pod6

Aliaksandr Zahatski has written a blog post about what you need to do to publish a blog written in pod6 using Gatsby and Netlify.

- andreoss changed all of the internal references of Perl 6 to Raku in the JVM backend specific code.
- Suman Khanal has changed many, many references in the internal documentation of NQP and Rakudo.

These are entries for Challenge #61 that have Raku solutions:
- Weekly Challenge #61 by Luca Ferrari.
- The IPv4 Product, Raku Edition by Arne Sommer.
- Max Subarray Product and IP Address Partition by Laurent Rosenfeld.
- Weekly Challenge #61 by Javier Luque.
- Weekly Challenge #61 by Mohammad S Anwar.
- Weekly Challenge #61 by Shahed Nooshmand.
- Produce Market Protocols by Colin Crain.
- IPv4 Partition by Donald Hunter.
And these are entries for Challenge #62 that have Raku solutions:
- Weekly Challenge #62 by Luca Ferrari.
- The Email Queen with Raku by Arne Sommer.
- Sort Email Addresses by Laurent Rosenfeld.
- Weekly Challenge #62 by Javier Luque.
- Weekly Challenge #62 by Mohammad S Anwar.
- Weekly Challenge #62 by Shahed Nooshmand.
- The Spacequeen’s Sordid Emails by Colin Crain.

Challenge #63 is up for your perusal!

- Patrick Böker fixed various issues with the Rakudo build on Windows and set up AzureCI test setup for MoarVM, nqp and Rakudo.
- Tim Smith fixed an issue with negative width arguments with sprintf and added support for the :chomp named parameter to
- Christian Bartolomäus and Ben Davies fixed various issues on the JVM backend.
- Elizabeth Mattijsen sped up various aspects of IO::Path and added a RAKU_REPL_OUTPUT_METHOD environment variable to control stringification in the
- Sylvain Colinet fixed the is() test sub so that it can take Bufs, taking the .raku method for comparison.
- Tom Browder worked on Damian Conway‘s notes with regards to declarators in
- Will Coleda converted the INSTALL.txt file to an INSTALL.md for better rendering.
- Jonathan Worthington and Timo Paulssen worked on the new dispatch implementation (aka the Great Dispatcher Overhaul).
- And quite a number of other fixes, optimisations, and improvements.

Questions about Raku
- Split string to fixed length chunks and write in separate line by Suman Khanal.
- Where is the documentation of the Unicode characters i.e. » by Konrad Eisele.
- Determine if a variable is set by user0721090601.
- Why does Rakudo Nil for a typed and assigned scalar? by Ross Attrill.
- Seq is already in use/consumed (nested take) by Holli.
- Trying out Hyper Operators by skar123.
- Grammar: use named regex without consuming matching string by user13195651.
- Put named capture from regex in Subset into a variable in the signature by Holli.
- How can you create a combination scope/trait declarator? by JJ Merelo.
- .race() example not working by user2944647.
- Overriding attribute accessor with Proxy by Khalid Elboray.
- Overloading a package function to detect no arguments have been used by Antonio Gamiz Delgado.
- What’s the real difference between a token and a rule? by Electric Coffee.
- How do I get the line number of a pattern match? by s-ro_mojosa.

Meanwhile on Twitter
- Alt-Entering into double quotes by Jonathan Worthington.
- Trans with a hash by Suman Khanal.
- Burning forever by Brian Wisti.
- Legacy handcuffs by Jack Merlot.
- RakuDist returning by Alexey Melezhik.
- About Python? by Jeroen Baten.
- Daily Cases constant by Andrew Shitov.
- Still doing… by めたつか.
- Flourishing! by Moritz Lenz.
- Some kind of superhuman? by Mohammad S Anwar.
- Much beautiful by Alex Felici.
- Just like any method by Khalid Elboray.
- Replacing URL parameters by Khalid Elboray.
- Like the spokesbug by nil2.
- On testing scripts by JJ Merelo.
- More on testing scripts by JJ Merelo.
- CRUD with Cro::HTTP by Khalid Elboray.
- No weekly by Elizabeth Mattijsen.
- Vikna covered by The Perl Shop.
- Rakudist live again by Alexey Melezhik.
- In one click! by Alexey Melezhik.
- Managed to release by Andy Rennard.
- Coming for 20 years now by Trix Farrar.
- Physics::Unit edge by The Perl and Raku Conference.
- Raku has Rationals by Dan Kogai.
- Improving the perception by The Perl and Raku Conference.
- Readable files by Mohammad S Anwar.
- Another one-liner by Mohammad S Anwar.
- Tweaking your own clone by Khalid Elboray.
- As if it was written by Mohammad S Anwar.
- Learned a lot by Samuel Chase.
- Missed it by Mohammad S Anwar.
- Wauw, wauw, wauw! by Arjan Widlak.
- A virtual lightning talk by The Perl and Raku Conference.
- More fun by Joaquin Ferrero.
- Learning many languages by 嘟嘟噜.

Meanwhile on perl6-users
- Help with grammar by David Santiago.
- Regexps using $ by Joseph Brenner.
- Flycheck-raku module for MELPA by Xin Cheng.

Comments about Raku
- Surprised by consistency by Brad Gilbert.
- Shame if no one copied by Brad Gilbert.
- Times it is faster by Brad Gilbert.
- Learn new paradigms by zubenel0.
- Happening with Raku by Ralph Mellor.
- Mostly not used by Elizabeth Mattijsen.
- Coroutines on top of scoped continuations by Ralph Mellor.
- More appropriate for Raku by 1nickt.
- Enticing by johnfrazer783.
- A full parser by ben509.
- Supply covers this use case by Ralph Mellor.
- /x should be the default by riffraff.
- On member syntax by Ralph Mellor.
- On malleability by Ralph Mellor.
- Decided to rename by vasco.
- Appropriate features by Ralph Mellor.
- Sad it took so long by jhoechtl.
- On combining grammars by Ralph Mellor.
- And Raku, and Haskell by mfontani.
- An inspiration? by Elizabeth Mattijsen.
- No one is forced by mirchibajji.
- Other by Elizabeth Mattijsen.

New Raku Modules
- Test::Script by JJ Merelo.
- Radamsa by Dave Lewis.
- Vikna by Vadim Belman.
- Spreadsheet::XLSX by Jonathan Worthington.
- Text::Slugify by Khalid Elborey.
- Native::Compile by Curt Tilmes.
- SDL2-ttf by Steve Schulze.
- Set::Equality by Elizabeth Mattijsen.

Updated Raku Modules
- Shelve6 by Robert Lemmen.
- Test::Async by Vadim Belman.
- Lumberjack, FastCGI::NativeCall by Jonathan Stowe.
- Gnome::N, Gnome::GObject, Gnome::Gtk3, MongoDB by Marcel Timmerman.
- Term::Choose by Matthäus Kiem.
- LibXML, Font::FreeType by David Warring.
- Sparrowdo, Sparrow6 by Alexey Melezhik.
- Log, Log::Colored, Log::Simple, Log::JSON by Patrick Spek.
- Console::Blackjack by Greg Donald.
- Email::Mime by Rod Taylor.

Sorry to have been away for a week, but last Monday simply was not a good day for yours truly to be writing a Rakudo Weekly News. The past 2 weeks have brought a very nice batch of new and updated modules, blog posts and very nice additions / speedups. Keep spreading the word about Raku! And see you in health and good spirits next week!
how to overwrite the data in hive using sqoop

I am trying to load data into an already existing table in hive via sqoop from a mysql database. I am referring to the below guide for reference:

http://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html#_importing_data_into_hive

--hive-import has been tried and tested successfully. I created a hive table as below:

create table sqoophive (id int, name string, location string)
row format delimited
fields terminated by '\t'
lines terminated by '\n'
stored as textfile;

Loaded the data as required. I want to use the --hive-overwrite option to overwrite the content in the above table. As per the guide mentioned above:

"--hive-overwrite Overwrite existing data in the Hive table."

"If the Hive table already exists, you can specify the --hive-overwrite option to indicate that existing table in hive must be replaced."

So I tried the below queries separately to get the result:

sqoop import --connect jdbc:mysql://localhost/test --username root --password 'hr' --table sample --hive-import --hive-overwrite --hive-table sqoophive -m 1 --fields-terminated-by '\t' --lines-terminated-by '\n'

sqoop import --connect jdbc:mysql://localhost/test --username root --password 'hr' --table sample --hive-overwrite --hive-table sqoophive -m 1 --fields-terminated-by '\t' --lines-terminated-by '\n'

But rather than replacing the content in the existing table, it just created a file in the below path:

/user/<username>/<mysqltablename>

Can somebody please explain to me where I am going wrong?

The first query should work fine. I didn't give fields terminated and lines terminated as the schema already exists. The keywords --hive-import and --hive-overwrite should both be there; if only --hive-overwrite is there, it doesn't load data to the table, it just copies to hdfs.
sqoop import without --target-dir OR --warehouse-dir (for --hive-import) will import to /user/<username>/<mysqltablename>:

By default, Sqoop will import a table named foo to a directory named foo inside your home directory in HDFS. For example, if your username is someuser, then the import tool will write to /user/someuser/foo/(files). You can adjust the parent directory of the import with the --warehouse-dir argument.

You can also explicitly choose the target directory with the --target-dir param, but as @hrobertv said, --hive-overwrite does not delete the existing dir; it overwrites the HDFS data location of the hive table. If you want to save new data at the same location as the origin, you would have to delete the existing table dir first and then run sqoop import specifying --target-dir OR --warehouse-dir (for --hive-overwrite) to store data at a specific location as per your requirement...

It's putting the _SUCCESS file in /user/<username>/<mysqltablename>. You can change where that goes with --warehouse-dir, ex: --warehouse-dir /tmp

One would think that hive-overwrite would handle this, meaning remove that directory first. But for good reason Hive doesn't want to start removing dirs in HDFS. What if something else was put in there? hive-overwrite is saying, "I'm going to overwrite the rows in Hive, not just add to the table." Thus you will not have duplicates.

You have to remove that directory and the _SUCCESS file first; or better yet, right after the import is successful:

hadoop fs -rm -R /user/<username>/<mysqltablename>
Good job blizzard, good job

what topics are being censored…?

The ones about HK. Those I replied to yesterday are all gone.

Because those are all against community guidelines; the fact that they let those topics remain for a while was actually a bit weird. And expect this topic to also get removed, because questioning moderators’ decisions on thread deletion and such is also against community guidelines.

They aren’t gone unless they completely violated the rules of the forums. If they didn’t violate the rules, they were moved to this thread, which is acting as a megathread for all the posts to help reduce the amount of individual topics about it on the forums.

I think you confuse censoring with cleaning up spam… most threads are just moved to the megathread… you want to see censorship… check out the Chinese social media sites.

Just keep posting new ones and forcing them to take them down or merge them with other threads; it keeps the issue in the public eye and makes Blizzard look even worse. I’m glad people are finally waking up to the abuses of this company on free speech, it has been a long time coming.

They got merged with the megathread. You should even get notifications for the merging if you replied to those threads. It’s not because you’re fighting for a good cause that it’s suddenly ok to start lying. Having a video game discussion forum cleaned up of annoying redundant HK posts is hardly censoring. They would remove a bunch of spam posts about any off topic trend. Or on topic… see Mercy.

Most topics got moved here:

OP’s not really wrong; megathreads (especially with the whole pageless format) might as well be censorship.

It’s just spam cleanup… heck, the community does more censorship than Blizz does with the flagging feature, which ACTUALLY does make content unviewable.

I suppose people have different definitions of spam.
As I see it, while some threads are legit spam, every time a megathread situation happens the mods shove completely legit and high effort threads in too. Otherwise, if the definition is just talking about the same topic over and over again, they should be making nerf megathreads as well, yet that never happens. They only do this with very specific topics, topics that are inconvenient for Blizzard. It’s pretty obvious the goal is to kill the conversation.

Like I said above, they also do it for completely relevant topics as well… let’s see… we had several Mercy megathreads, we have a Bastion megathread, a Symmetra megathread…

Plus you don’t need 30 different threads all saying the same exact thing… it just becomes spammy.

You realize you just named three messy reworks that made people desperately unhappy and that Blizzard wanted shut up, right? Especially the first? Meanwhile, where are the role queue megathreads? Where’s the megathread for the Moira nerfers? Hell, where was the megathread for all the mass rez hatred that led to the rework in the first place?
GOODFEEL® Copy Protection Problems

With the release of version 2.6, GOODFEEL no longer relies on the use of a floppy disk to install an authorization key. If you have Windows NT, 2000 or XP you must upgrade to GOODFEEL 2.6. Even if you use other versions of Windows, there are other good reasons to upgrade.

Copy Protection Update

Our copy protection vendor has updated their product and fixed a few bugs. If your GOODFEEL version number is less than 2.52 then please:
- Insert your GOODFEEL installation or authorization floppy.
- Run http://www.dancingdots.com/cp/CPTest.EXE and allow it to update your copy protection executables.
- Retry what you were having problems with.

If you continue to experience problems please run CPTest.exe again and e-mail the results to

Reinstalling Your Copy Protection

To reinstall your copy protection:
- Put your floppy into your computer.
- Open My Computer and double-click your floppy drive.
- Double-click winmove.exe on your floppy drive.
- Check Move Authorization and uncheck Reset Authorization and click on
- Enter A:\ as the source drive and C:\ as the destination drive and click

Installing the Copy Protection under MS-DOS Mode

Note that Windows ME will not boot into MS-DOS mode so you may need to look for a different solution. Here are our instructions for booting into MS-DOS mode:
- Go to Start | Shutdown and select Restart in MS-DOS mode and click on Yes or OK.
- Insert your GOODFEEL installation or authorization floppy.
- Type A:[ENTER].
- If you have reset codes from Dancing Dots, type EVMOVE /R A:\[ENTER] and enter your reset codes when prompted. Wait for this to finish.
- Type EVMOVE A:\ C:\ /b[ENTER] (note the back and forward slashes). Wait for this to finish.
- Remove your floppy, type exit and try GOODFEEL.

Your Machine Halts Unexpectedly

If your machine halts unexpectedly during the installation process, click here for information from our copy protection vendor.
Note that Windows Me cannot boot into MS-DOS mode, so you will need to deal with the 32-bit protected programs in Windows as mentioned in the link above. See the directions above if you want to install the copy protection under MS-DOS mode.

Authorization not found. 7043-4302-4300 (when running GOODFEEL)

Sometimes the copy protection must be installed in MS-DOS mode. See the directions above.

All Other Copy Protection Problems

Click here (http://www.az-tech.com/ev_faq.html) for all other problems and contact us if you still need help after visiting this site.
We don't actually supply the physical location of your Web Site on the Internet. Instead, we have contracts with other firms that specialize in that service. We then deal directly with them for you, on your behalf. So you still have a one-source Web presence provider (us), while allowing us to specialize in the business that we do best, and allowing other companies to do what they do best. Our preferred provider supplies you with the following capabilities:
- Your own domain name (www.yourcompany.com)
- Multiple, unlimited email IDs (firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, etc.)
- Mailing lists

There are four types of speed and connectivity issues that a Web Host provider should be prepared to discuss with you. Although we've attempted to simplify these issues, they are still somewhat technical. The most important sentences are in bold, so you can just read the bold portions. Because we've attempted to simplify the information in this section (i.e., un-geekified it), it may not be 100% technically accurate. Instead, we've strived to convey the issues and the relevant points, and are ignoring the more nitpicky details.

- Dial-in capabilities: Many Web Host providers are also ISPs and offer dial-in access to the Internet. The disadvantage of this is that their Internet connections are frequently taken up by people surfing the web rather than by visitors accessing your Web Site. ISPs tend to optimize their setup for Web surfers rather than for Web Sites. We do not offer dial-in capabilities.
- The internal network speeds/capabilities: Our preferred provider has a "Switched Ethernet" internal network. Although many providers forget to consider this, the local network setup is a crucial element in determining overall Internet performance. A traditional 10baseT Ethernet (and even the newer 100baseT) becomes flooded with collisions as new servers and workstations are added. Our "Switched Ethernet" network gives each server maximum throughput to the outside world.
- The connection from their offices to their backbone provider: Our preferred provider has multiple T3/DS3 connections to the Internet backbone with a 10-16 Mbps data rate. The multiple T3/DS3 connections allow for redundancy: in the event that one fails, all traffic can be immediately routed through the other. This ensures your site has maximum availability.
- The connection from their backbone provider to the rest of the Internet: All Web Host providers, and all but a few of the very largest ISPs, use one or more Internet backbone providers for their access to the Internet. The two Internet backbone providers we use supply your site with multiple DS3/T3 connections to the Internet. Each provider supplies a capacity of 10-16 Mbps, scalable to 45 Mbps. This again ensures redundancy and maximum availability of your Web Site to your customers.

In summary, many Web Host providers either:
- optimize their connections for Web surfers rather than for viewing your Web Site,
- have an inadequately configured internal network,
- do not have adequate redundancy to ensure maximum availability of your Web Site, or
- claim to have multiple T1 connections to the Internet, when they really have a single slow T1 connection to their backbone provider, who in turn has multiple T1 connections or a T3 connection to the Internet.
RunTime Type Information (or RunTime Type Identification, or just RTTI) is a useful feature of the C++ language. As its name suggests, this facility gives you the ability to query type information at runtime; dynamic_cast<> and typeid() are the main tools to achieve it. Some of you may frown upon this RTTI thing because it seems to have a bad reputation out there, and there is a good reason for that. Generally you want to stay away from this feature, because static typing is much safer than dynamic typing thanks to the compiler, and RTTI incurs runtime overhead as well.

So why is it useful? Generally we use inheritance and polymorphism because we only care about the interface, not the exact implementations. However, some derived class may have a specific method that doesn't fit into the interface in any way. It is possible to add another virtual function to the base class and all its derived classes and let most of them provide empty implementations, but doesn't it sound awkward to tell every Person class to program() when only a Programmer class has to? It might be more reasonable to recover the true type (not exactly right with dynamic_cast<>, I'll talk about it later) of a polymorphic object and call the class's unique function as needed.

dynamic_cast<> is your first choice to achieve this. Consider the following code. As is quite clear from the code and comments, you can use dynamic_cast<> to walk around the inheritance graph and get to the specific class you need, as long as the types involved are polymorphic (i.e., have virtual functions). The names upcast, downcast, and crosscast come from the tradition of drawing the base class on top of the derived class in an inheritance graph.

Remember earlier I said that dynamic_cast<> isn't really about recovering the true type of a polymorphic pointer.
Let's say we have another class further down the inheritance graph:

class FurtherDerived : public Derived

Bjarne Stroustrup has made this quite clear in TCPL: "From a design perspective, dynamic_cast can be seen as a mechanism for asking an object if it provides a given interface." As long as the class provides the requested interface, dynamic_cast<> will work; in other words, it works for all derived classes down the road. So how do we find out the exact type of an object? It's typeid() to the rescue:

typeid(*pb1) == typeid(FurtherDerived); // true

The return type is a reference to const std::type_info, which has some other useful members such as a hash code and a name string. Keep in mind that dynamic_cast<> and typeid() are designed for different tasks, so you shouldn't misuse either. You could almost always redesign the inheritance relationships and use virtual functions to do dynamic dispatch. But do know that you have the right tools at hand, and don't hesitate to use them when needed.
package robustReversible;

import java.util.Arrays;
import java.util.BitSet;

public class DistributedStateManager implements CommunicationManager {

    public static final boolean USE_MONITOR = false;
    public static int MAX_N_PENDING_STATES = 5;
    private static byte MAGIC_HEADER = 107;
    private static byte HEADER_PLUS_FIXED_PAYLOAD = 4;
    protected static final int WAITTIME = 100;
    protected static final float WAITTIME_MS = WAITTIME/1000.0f;

    /**
     * Byte to integer conversion function
     * @param b the byte to convert
     * @return the unsigned integer value of b
     */
    private static int b2i(byte b) { return (int)b&0xff; }

    /**
     * Packet exchanged between nodes
     * @author ups
     */
    private class EightMsg {
        // Bit for switching to new sequence
        boolean alternateSequenceFlag;
        // Global state
        int state;
        // Current pending states
        int[] pending = new int[MAX_N_PENDING_STATES];
        // ID of module to execute state
        int recipientID;

        EightMsg(byte[] raw) {
            if(raw[0]!=MAGIC_HEADER) throw new Error("Incorrect packet");
            alternateSequenceFlag = raw[1]!=0;
            state = b2i(raw[2]);
            recipientID = b2i(raw[3]);
            for(int i=0; i<MAX_N_PENDING_STATES; i++) pending[i] = b2i(raw[4+i]);
        }

        /**
         * @param alternateSequenceFlag
         * @param state
         * @param recipientID
         * @param pending
         */
        EightMsg(boolean alternateSequenceFlag, int state, int recipientID, int[] pending) {
            this.alternateSequenceFlag = alternateSequenceFlag;
            this.state = state;
            this.recipientID = recipientID;
            this.pending = pending;
        }

        byte[] encode() {
            byte[] packet = new byte[HEADER_PLUS_FIXED_PAYLOAD+MAX_N_PENDING_STATES];
            packet[0] = MAGIC_HEADER;
            packet[1] = alternateSequenceFlag ? (byte)1 : 0;
            packet[2] = (byte)state;
            packet[3] = (byte)recipientID;
            // Encode the snapshot stored in this message, not the (possibly
            // concurrently updated) outer pendingStates field
            for(int i=0; i<MAX_N_PENDING_STATES; i++) packet[4+i] = (byte)pending[i];
            return packet;
        }
    }

    // Merge function implementation
    private static void intersect_plus_greater(int set0[], int set1[], int min, int dest[]) {
        int index0, index1, desti;
        desti = 0;
        for(index0=0; index0<MAX_N_PENDING_STATES; index0++) {
            if(set0[index0]==0) continue;
            if(set0[index0]>min) { dest[desti++] = set0[index0]; continue; }
            for(index1=0; index1<MAX_N_PENDING_STATES; index1++)
                if(set0[index0]==set1[index1]) { dest[desti++] = set0[index0]; break; }
        }
        for(index0=desti; index0<MAX_N_PENDING_STATES; index0++) dest[index0] = 0;
    }

    private static void intersect(int set0[], int set1[], int dest[]) {
        intersect_plus_greater(set0,set1,255,dest);
    }

    /* local state (s0,p0), incoming state (s1,p1), resulting state (return,p2) */
    /* correctness: see merge.c (tests equivalent implementation) */
    private static int merge(int s0, int p0[], int s1, int p1[], int p2[]) {
        if(s0>s1) {
            intersect_plus_greater(p0,p1,s1,p2);
            return s0;
        } else if(s0==s1) {
            intersect(p0,p1,p2);
            return s0;
        } else { /* s0<s1 */
            intersect_plus_greater(p1,p0,s0,p2);
            return s1;
        }
    }

    // Local state of distributed communication
    private CommunicationProvider provider;
    private int myID;
    private int localState;
    private boolean activated = false;
    private static final int INIT_WAITTIME_MS = 0;
    private int[] pendingStates = new int[MAX_N_PENDING_STATES];
    private BitSet pendingHandled = new BitSet();
    private int globalState;
    private int recipientID;
    private boolean startingModule;
    private boolean alternateSequenceFlag;
    private float lastTime;
    private boolean limitPendingOneWay;
    private boolean firstInit = true;

    public void senderAct() {
        float time = provider.getTime();
        if(lastTime+WAITTIME_MS>time) return;
        lastTime = time;
        EightMsg msg;
        synchronized(DistributedStateManager.this) {
            msg = new EightMsg(alternateSequenceFlag, globalState, recipientID, pendingStates);
        }
        provider.broadcastMessage(msg.encode());
    }

    public synchronized int getMyNewState() {
        int state = localState;
        localState = 255;
        return state;
    }

    public synchronized boolean pendingStatesPresent() {
        int i;
        for(i=0; i<MAX_N_PENDING_STATES; i++) {
            if(pendingStates[i] != 0) {
                //System.out.println("[Pending states are present]");
                return true;
            }
        }
        return false;
    }

    public synchronized void sendState(int state, int recpID) {
        if(state>globalState) {
            globalState = state;
            recipientID = recpID;
        }
    }

    public void reset_sequence() {
        throw new Error("Sequence reset not implemented");
    }

    public synchronized void reset_state() {
        int i;
        startingModule = true;
        /* reset the vars to their init values */
        globalState = 0;
        localState = 255;
        recipientID = 0;
        for(i=0; i<MAX_N_PENDING_STATES; i++) pendingStates[i] = 0;
        pendingHandled = new BitSet();
        /* flip the flag (can't do that with ~alternateSequenceFlag, beware!) */
        /*if(alternateSequenceFlag) alternateSequenceFlag = false; else alternateSequenceFlag = true;*/
        this.notifyAll();
        if(USE_MONITOR) update();
    }

    public void init(int myID, int firstModuleID) {
        this.myID = myID;
        //if(firstInit) {
        //firstInit = false;
        if(myID==firstModuleID) {
            provider.delay(INIT_WAITTIME_MS);
            if(globalState==0) localState = 0;
        }
        // }
        if(USE_MONITOR) update();
    }

    public int getGlobalState() { return globalState; }

    private static final int[] emptyComparator = new int[MAX_N_PENDING_STATES];

    public synchronized void receive(byte[] rawMessage) {
        EightMsg msg = new EightMsg(rawMessage);
        /* this should not apply to the module starting the sequence */
        if(alternateSequenceFlag != msg.alternateSequenceFlag && (!startingModule)) {
            /* reset the vars to their init values */
            globalState = 0;
            localState = 255;
            recipientID = 0;
            for(int i=0; i<MAX_N_PENDING_STATES; i++) pendingStates[i] = 0;
            alternateSequenceFlag = msg.alternateSequenceFlag;
        }
        /* if we are the starting module we should disregard msgs coming from the previous run! */
        else if(alternateSequenceFlag != msg.alternateSequenceFlag && (startingModule)) return;
        int previousState = globalState;
        if(limitPendingOneWay && msg.state<previousState) return;
        int pendingBuffer[] = new int[MAX_N_PENDING_STATES];
        globalState = merge(globalState, pendingStates, msg.state, msg.pending, pendingBuffer);
        int newPending[] = findNew(pendingStates,pendingBuffer);
        // if(!Arrays.equals(newPending, emptyComparator) && globalState>previousState) {
        //     System.out.println("New pending states module "+myID+": "+Arrays.toString(newPending)+" incoming global state "+globalState+" local was "+previousState);
        // }
        if(msg.state<previousState && !Arrays.equals(pendingStates, pendingBuffer))
            System.out.println("*** Out-of-order merge");
        int[] tmp = pendingStates; pendingStates = pendingBuffer; pendingBuffer = tmp; /* swap */
        if(globalState>previousState) {
            recipientID = msg.recipientID;
            if( msg.recipientID == myID ) {
                System.out.println("Module "+myID+" updated local state to "+globalState);
                localState = globalState;
            }
        } else if(responsibleForPendingState(newPending)) {
            int pendingState = findResponsiblePendingState(newPending);
            if(!pendingHandled.get(pendingState)) {
                System.out.println("*** "+myID+" setting local state to "+pendingState);
                localState = pendingState;
            }
        }
        if(USE_MONITOR) update();
    }

    private int findResponsiblePendingState(int[] newPending) {
        for(int i=0; i<newPending.length; i++)
            if(provider.isResponsible(myID, newPending[i])) return newPending[i];
        throw new Error("No responsible pending state found");
    }

    private boolean responsibleForPendingState(int[] newPending) {
        for(int i=0; i<newPending.length; i++)
            if(newPending[i]!=0 && provider.isResponsible(myID,newPending[i])) return true;
        return false;
    }

    private int[] findNew(int[] oldStates, int[] newStates) {
        int[] result = new int[MAX_N_PENDING_STATES];
        int index = 0;
        for(int i=0; i<newStates.length; i++) {
            int j=0;
            for(; j<oldStates.length; j++)
                if(newStates[i]!=0 && newStates[i]==oldStates[j]) break;
            // A state is new when it occurs in newStates but not in oldStates
            if(j==oldStates.length && newStates[i]!=0) result[index++]=newStates[i];
        }
        return result;
    }

    public void setCommunicationProvider(CommunicationProvider provider) {
        this.provider = provider;
    }

    public void addPendingState(int state) {
        pendingHandled.set(state);
        for(int i=0; i<MAX_N_PENDING_STATES; i++) {
            if(pendingStates[i] == state) return;
            if(pendingStates[i] == 0) { pendingStates[i] = state; break; }
        }
    }

    public void removePendingState(int state) {
        for(int i=0; i<MAX_N_PENDING_STATES; i++) {
            if(pendingStates[i] == state) { pendingStates[i] = 0; break; }
        }
    }

    public String dump() {
        StringBuffer res = new StringBuffer(this.globalState+" [ ");
        for(int i=0; i<MAX_N_PENDING_STATES; i++) res.append(pendingStates[i]+" ");
        res.append("]");
        return res.toString();
    }

    public void setLimitPendingOneWay(boolean b) { this.limitPendingOneWay = b; }

    public float getTime() { return provider.getTime(); }

    public void update() { MonitorWindow.update(myID,globalState,pendingStates); }
}
TSP50c1x MB games test modes found so far (all games programmed by Michael Gray, I believe; I need to verify which games he programmed at some point, he has an account on BoardGameGeek):

The test modes are all accessed by holding down a key and pressing the 'On' button:

Electronic Talking Battleship (1989):
Key Test: Hold the Player 1 A1 key (or the Player 2 J10 key, which maps to the same matrix position), then press the green 'On' button to enter key test mode. When in key test mode, you are expected to press Player 1 keys A1, B2, C3, D4, E5, F6, G7, H8, I9, J10 in that order; the system will beep after each key is pressed. Then press Player 1 Fire, and the system will say 'two'. Now press Player 2 A1, B2, C3, D4, E5, F6, G7, H8, I9, J10 in that order; the system will beep after each key is tested. Finally, press Player 2 Fire; the game will say 'Battleship' and play the explosion sound while flashing the ship explosion LED, then turn off.
Sound Test: Hold the Player 1 B2 key (or the Player 2 I9 key, which maps to the same matrix position), then press the green 'On' button to enter sound test mode. The game will play the beep, all of the speech samples, the firing arc and music jingles in order, followed by the explosion sound and the flashing ship explosion LED, and then turn off.

Omega Virus (1992):
Key test: Hold the '0' key, then press the 'On' button to enter key test mode. The game will say 'zero', 'one', 'two' when pressing the 3 number keys, will say <monotone>'we are running out of time' when pressing the 'Repeat' key, and then turn off.
Sound test: Hold the '1' key, then press the 'On' button to enter sound test mode. The game will play the explosion sound followed by all of the speech sounds and sound effects, followed by the glitch samples, and then turn off.

I am assuming the international (German, Spanish, Italian, French, etc.) versions of these games have the same test modes. (I also assume the test modes are the same on the 1998 ?re-release? of Electronic Talking Battleship. It is quite unclear what changed between the 1989 and 1998 releases, but they use different mask tsp50cxx chips, so there must be some changes. It would be worth checking whether the C3 key plus 'On' activates some new test mode that didn't exist in the older MCU.)

Given the way these test modes work, I suspect Dream Phone and other games of this era probably have a similar set of tests if you hold the '0' or '1' keys on the phone or other entry pads while turning on the game.

Also note Omega Virus has a hidden game mode: after you press 'On' and the game asks you to enter skill, press 'Repeat' 3 times and the virus will laugh once, then taunt the players. This enables an extra-difficult mode where the virus will start to move between rooms once any player gathers all 3 of the items needed to kill it! This makes the endgame MUCH harder!

P.S. If anyone has one of the international versions of these games, please open them up and send a picture or text of what is written on the tsp50cxx 16-pin MCU! The CSMxxxxx number will be different from the English ones. I'm curious whether there is a British/UK voice version of some of these games as well.

Last edited by Lord Nightmare; 05/10/15 07:05 PM. Reason: add PS about intl versions

"When life gives you zombies... *CHA-CHIK!* ...you make zombie-ade!"
Oct 8, 2007

If you don't find programming algorithms interesting, this post is not for you.

Reservoir Sampling is an algorithm for sampling elements from a stream of data. Imagine you are given a really large stream of data elements. Your goal is to efficiently return a random sample of 1,000 elements, evenly distributed over the original stream. How would you do it?

The right answer is generating 1,000 random integers between 0 and N - 1, then retrieving the elements at those indices, and you have your answer. If you need the elements to be unique, just throw away indices you've already generated.

So, let me make the problem harder. You don't know N (the size of the stream) in advance and you can't index directly into it. You can count it, but that requires making 2 passes of the data. You can do better. There are some heuristics you might try: for example, guess the length and hope to undershoot. Any such heuristic will either not work in one pass or will not be evenly distributed.

A relatively easy and correct solution is to assign a random number to every element as you see it in the stream, and then always keep the top 1,000 numbered elements at all times. This is similar to how a SQL query with ORDER BY RAND() works. This strategy works well, and only requires additionally storing the randomly generated number for each element.

Another, more complex option is reservoir sampling. First, you make a reservoir (array) of 1,000 elements and fill it with the first 1,000 elements of your stream. That way if you have exactly 1,000 elements, the algorithm works. This is the base case.

Next, you want to process the rest of the stream so that after seeing the i-th element, every element seen so far sits in the reservoir with probability 1000/i. How can you do this? Start with the 1,001st element: accept it into the reservoir with probability 1000/1001 and, if accepted, let it replace a reservoir element chosen uniformly at random. The new element then has the desired probability 1000/1001 of being sampled. What about the elements already in the reservoir? A resident element is evicted in this round with probability (1000/1001) x (1/1000) = 1/1001. So, the probability that the 2nd element survives this round is 1 - 1/1001 = 1000/1001. This is the probability we want.

This can be extended to the i-th round: accept the i-th element with probability 1000/i and, if accepted, let it replace a uniformly chosen reservoir element. It is pretty easy to prove that this works for all values of i by induction. (1000/i) x (1/1000) = 1/i is the probability that a given element will be removed in that round, given that it has made it to the reservoir so far. The probability that it isn't removed is the inverse, (i - 1)/i. The final probability that an element first seen at position i survives to the end of a stream of length N is therefore (1000/i) x (i/(i+1)) x ((i+1)/(i+2)) x ... x ((N-1)/N) = 1000/N. This is the probability we want.

Now take the same problem above but add an extra challenge: how would you sample from a weighted distribution, where each element has a given weight associated with it in the stream? This is sorta tricky. Pavlos S. Efraimidis and Paul G. Spirakis figured out the solution in 2005, in a paper titled Weighted Random Sampling with a Reservoir. It works similarly to the assign-a-random-number solution above. As you process the stream, assign each item a "key": for each item in the stream with weight w, generate a uniform random number u in (0, 1) and let the key be u^(1/w). Now, simply keep the top 1,000 items ranked by key.

To see how this works, let's start with non-weighted elements (i.e., weight = 1). Then the key is just a uniform random number, and keeping the top 1,000 keys is exactly the assign-a-random-number strategy above. Now, how does it work with weights? For two items, the probability of choosing item i over item j is the probability that u_i^(1/w_i) > u_j^(1/w_j), which works out to w_i / (w_i + w_j), exactly proportional to the weights.

This is the problem that got me researching the weighted sample above. In both of the above algorithms, I can process the stream in O(N) time where N is the length of the stream, in other words: in a single pass. If I want to break up the problem on, say, 10 machines and solve it close to 10 times faster, how can I do that?

The answer is to have each of the 10 machines take roughly 1/10th of the input and generate its own reservoir sample from its subset of the data, using the weighted variation above. Then a final process must take the 10 output reservoirs and merge them. The trick is that the final process must use the original "key" weights computed in the first pass. For example, if one of your 10 machines processed only 10 items in a size-10 sample, and the other 9 machines each processed 1 million items, you would expect that the one machine with 10 items would likely have smaller keys and hence be less likely to be selected in the final output. If you recompute keys in the final process, then all of the input items would be treated equally when they shouldn't be.
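The unweighted algorithm described above can be sketched as follows (a minimal illustration, not the article's own code; the function name is mine):

```cpp
#include <cstdlib>
#include <vector>

// One-pass reservoir sampling: returns k elements sampled uniformly from a
// stream whose length need not be known in advance.
std::vector<int> reservoirSample(const std::vector<int>& stream, std::size_t k) {
    std::vector<int> reservoir;
    for (std::size_t i = 0; i < stream.size(); ++i) {
        if (i < k) {
            reservoir.push_back(stream[i]);      // base case: fill the reservoir
        } else {
            // Keep element i with probability k/(i+1): draw j uniformly from
            // [0, i]; j lands inside the reservoir with exactly that probability.
            std::size_t j = std::rand() % (i + 1);
            if (j < k) reservoir[j] = stream[i]; // evict a uniformly chosen resident
        }
    }
    return reservoir;
}
```

std::rand keeps the sketch short; in real code the <random> facilities would be preferable, and the weighted variant would instead rank items by the key u^(1/w).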
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;

namespace ProceduralStructures {
    [Serializable]
    public class CityDefinition {
        [Tooltip("This is going to be the parent of all generated houses.")]
        public GameObject parent;
        [Tooltip("This terrain is used to choose the height value")]
        public Terrain terrain;
        [Tooltip("House prefabs, picked from randomly")]
        public List<GameObject> houses;
        [Tooltip("House definitions, picked from randomly")]
        public List<HouseDefinition> houseDefinitions;
        [HideInInspector]
        public List<HousePlaceholder> housePlaceholders;
        [Tooltip("Transforms used as a path to draw the streets along")]
        public List<Street> streets;
        [Tooltip("Random number generator will be initialized with this value")]
        public int seed;
        [Tooltip("General ground offset of all houses, set this if you have z-fighting issues")]
        public float yOffset = 0;
        public RoadPainting roadPainting;

        [Serializable]
        public class Street {
            [Tooltip("Name of the street for address labels, parent of waypoints will be used if available")]
            public string name;
            [Tooltip("The (estimated) length of the street will be calculated")]
            public float length;
            [Tooltip("Distance from the middle of the street to the front of the house")]
            public float doorToStreet = 3.5f;
            [Tooltip("Distance between houses on the same side of the street")]
            public float houseToHouse = 0.8f;
            [Tooltip("Inhibit houses on the left side of the street")]
            public bool abandonLeft = false;
            [Tooltip("Inhibit houses on the right side of the street")]
            public bool abandonRight = false;
            [HideInInspector]
            public List<Tangent> tangents;
            [HideInInspector]
            public List<Vector3> points;
            [Tooltip("Use a bezier spline instead of straight lines between waypoints")]
            public bool smoothCurve;
            [Tooltip("Use the below node as a parent of waypoints")]
            public bool useChildNodes;
            [Tooltip("Use all child nodes of this transform as waypoints")]
            public GameObject transformsParent;
            [Tooltip("Define single waypoints instead of the parent above")]
            public List<Transform> transforms;
        }

        [Serializable]
        public class RoadPainting {
            public bool enabled;
            [Tooltip("Terrain to paint road on")]
            public Terrain terrain;
            [Tooltip("Index to the terrain layer with the road texture, starts at 0")]
            public int layerIndex;
            [Tooltip("max. strength to be used for road texture")]
            [Range(0,1)]
            public float maxAlpha = 1f;
            [Tooltip("Brush radius in pixels")]
            public int paintRadius = 2;
        }

        [Serializable]
        public class HousePlaceholder {
            public HouseDefinition houseDefinition;
            public GameObject prefab;

            public HousePlaceholder(HouseDefinition houseDefinition, GameObject g) {
                this.houseDefinition = houseDefinition;
                this.prefab = g;
            }
        }
    }
}
In the last tip (TIP #88) we saw how to encrypt a password. Now in this tip I would like to share how to check an encrypted password. That is, once you have stored your encrypted password in the database, the next step is to compare that particular password with your input password and return results accordingly.

The syntax of PWDCOMPARE is very simple:

PWDCOMPARE('Password plain text', 'Password encrypted form')

This function returns 1 if the plain text and the hash value match, else it returns 0.

Let's suppose we have created a table with 3 columns, UserId, UserName, and password, as shown below:

DECLARE @tblLogin AS TABLE (UserId INT IDENTITY, UserName VARCHAR(100), EncryptedPassword VARBINARY(MAX))

Now suppose we have inserted 2 rows into it with encrypted passwords:

INSERT INTO @tblLogin VALUES ('Indiandotnet', PWDENCRYPT(N'MyPassword'))
INSERT INTO @tblLogin VALUES ('SQLRaaga', PWDENCRYPT(N'Test'))

Now suppose we want to write a query which returns the rows from @tblLogin whose password is Test; it should return SQLRaaga. For this I have to write the following query:

SELECT * FROM @tblLogin WHERE PWDCOMPARE(N'Test', EncryptedPassword) = 1

For detail, take a look at the snap below. I hope you understand with the above provided example.

Security is always a concern for every database developer. How to secure valuable information is one of the major and important aspects. The first approach toward security is to have a strong username & password, and the next step is to store the password in encrypted form. This article will help you to encrypt your password as a hash. Isn't it interesting? SQL Server provides a simple function by which we can encrypt a password from plain text to a hash. The valuable function is PWDENCRYPT. By the name it is clear that it will encrypt the password. The syntax is very simple:

PWDENCRYPT(N'String which you want to encrypt')

See the snap below for more detail. I hope this tip helps you to secure your password.

Determining table dependencies is challenging sometimes, but we can easily resolve this by using a simple stored procedure which SQL Server provides. By using this stored procedure we can easily determine all the dependencies of a particular table. The stored procedure is sp_msdependencies. We can use this stored procedure as shown below:

EXECUTE sp_msdependencies 'tableName'

For example, I am using AdventureWorks2012 and I want to know the dependencies of the Product table, so I have to write the following command:

EXEC sp_msdependencies '[Production].[Product]'

When I run this command I get the result shown in the figure below. I hope this tip may help you somewhere. Thanks for reading.

It might already be known to you, but I thought of sharing it because I frequently use this command and it is very useful. When someone wants to see the definition of a function or stored procedure, he/she can use this handy command. The syntax is very simple. Just write:

sp_helptext StoredProcedureName/FunctionName

For example, if I want to see the definition of a stored procedure "proc_FindStudentUsingCorrect", then I have to write the following command (see the snap below for detail). You can now copy the result text and check what exactly is written in the stored procedure or function. I hope you will use it in your day-to-day practice.

Why is SQL Server running slow? What processes are currently running on a SQL Server instance? Many other questions like these, which might help us understand our current SQL Server instance's health, can be answered by a simple command: sp_who2. sp_who2 is an undocumented command. You can utilize this command to check the current status of your SQL Server. We can run this stored procedure directly in Management Studio; see the snap below for detail. You will find that sp_who2 provides login details, database name, the command action currently applied, CPU time, disk IO, etc. You can easily find which SPID is consuming the highest CPU, disk IO, etc. I hope this stored procedure might help you.

One of the good sentences I remember: "When someone has teeth he/she has no nuts, and when someone has nuts he/she has no teeth." Joking apart, you understand what I mean to say here: if you have the resources, then utilize them. One of the most important aspects of performance is memory. The main point here is that we may have a highly configured machine with more than 16 GB RAM, but the pain point is that our SQL Server is not using the available memory. Configuring memory for SQL Server is super easy, but at the same time you need to understand how much memory you can assign for SQL Server to use, because you need to leave some buffer memory for your operating system and other processes.

Now just follow the steps below to configure the memory for SQL Server:
Step 1: Right click the server and open SQL Server Properties.
Step 2: Now select the Memory tab and you will find the screen below.
Step 3: You can change the boxed max memory value above according to your available memory calculation.

Or you can run the following commands as well to set the maximum memory that SQL Server can utilize (note that RECONFIGURE is needed for the changes to take effect):

sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'max server memory', 4096; -- 4GB
RECONFIGURE;

One more important point I would like to share: if your machine is an x86 machine, in such a case you have to use the /3GB switch in the boot file first. You can find the instructions to set the /3GB switch at the following link: https://technet.microsoft.com/en-us/library/bb124810%28v=exchg.65%29.aspx

I hope this article might help you.

Sometimes it might be possible that you have to run DOS commands from SQL Server. In such a situation you have to enable the xp_cmdshell option in the SQL configuration. To enable this we can write the following statements:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

Just wanted to share that it can be a security threat as well, so enabling the xp_cmdshell option might sometimes be dangerous if we do not handle SQL injection.

Now to disable this we can write the following commands:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;

We can enable/disable this by the following steps as well (the steps below will work with SQL Server 2008 or higher versions):
Step 1: Select the Facets option by right clicking the SQL database as shown below.
Step 2: When you go with option one you will get a new screen. In that new screen select the facet "Surface Area Configuration".
Step 3: Now all the advanced options of Surface Area Configuration will be available, as shown in the figure below; you can enable/disable xp_cmdshell.
Since I'm interested in a wide range of topics, I get asked a lot how I become knowledgeable about most of them. The last time I spoke about this in a public conversation was at Ingenuos (Alberto Tello's podcast, in Spanish), where we touched on this area and how I designed/hacked a process to be able to learn a lot in a short amount of time. Yet we didn't have much time to speak about the details, because the conversation was oriented towards social innovation.

Before we go into the details of learning hacks, there are two things that need to be clarified:
- Knowledgeable vs Experts: While you can become quite knowledgeable on any topic, you won't become an expert in a month. That takes real effort and dedication. Yet this will get you well above the average person.
- Knowledge vs Experience: You can become quite knowledgeable on any given topic, but you won't become a real expert until you put that knowledge into practice. You can read and write an encyclopedia on swimming, but it will never replace a good day at the beach.

That being said, let's get into it.

0. Intrinsic vs Extrinsic Motivations

While this might seem obvious and unnecessary to list, you might be surprised how much this can affect your ability to learn effectively in the long term. Make sure you want to learn something new for the right reasons. For example, if you want to learn how to raise llamas because you met a girl at the bar who is really into llamas, you probably won't get very far, and you'll end up blowing it by saying something that sounds stupid. That is just an extrinsic motivator, and those wear out pretty fast. By contrast, if you happen to be captivated by a delicious pie you had over the weekend and find yourself watching a lot of videos on how to make pies, you have a great intrinsic motivator to start learning something new, whose enjoyment will last a lifetime.
1. Follow Fresh Sources

The only problem with fixed knowledge such as books and online courses is that it can become obsolete in a relatively short amount of time. That is why I keep tabs on sites that constantly generate new content and knowledge on the topic I'm learn-hacking. I highly recommend Feedly, which has become the default RSS feed manager. You can either bring your own blogs or use its quite handy blog-suggestion feature, which will help you discover new sources.

Sometimes I get so much into a topic that I create my own tools to discover and manipulate sources. For example, I wanted to keep track of specific topics, but Feedly didn't provide a practical interface or sources, so I created my own aggregator that could present the information I wanted to learn, clean and in a single view (and then released it for generic topics). I also started to follow several YouTube channels, but I hated how much time I lost changing channels, so I created a tool that would display all the videos I would watch on a single page.

I did these things on the web because the information I'm looking for is there. You can choose any medium to manage the information; the main idea is to make the process as efficient as possible. For example, if you want to learn to cook the best steak in town, you can create a private newsletter and invite chefs and cooks to share 1 recipe a month, so you can all share techniques without much effort.

2. Buy top 5 Books on Amazon / Courses

The first and quickest way to get started is acquiring knowledge that has already been prepared and presented as a product. This means going to Amazon.com and buying the top 5 books on the topic. There are two ways you can choose your top 5: Bestsellers and Top Rated. If you choose the latter, remember to ignore reviews below 4; they are usually made by people who rate everything low. Just make sure you like the positive reviews. Then either summarize or rewrite everything you learned from them.
Using the highlighting feature can help a lot if you want to share notes, although I still recommend writing a few notes by hand. You will be amazed how much you can retain by writing physical notes.

Sometimes the knowledge you are looking for is a bit technical, or someone has already packed it into a better presentation. For example, I love Skillshare and General Assembly. As part of my “Responsive Web Design Learning Hack” I purchased Meng To’s online book/course.

3. Talk to Experts

So, now that you have actual knowledge and your own opinion on a specific topic, find some experts to share insights with. These experts can be the same people you have followed online, or someone you admire. You would be amazed how many people who seem hard to reach are quite accessible — if you approach them with something smart to say. If you have no direct introduction to them, you can always try sending an old-school snail-mail letter, even better if it’s handwritten, since they have about an 80% open rate, much higher than email.

4. Do Stuff

Now things start to get interesting: you need to take what you have learned and get real. Build something, put things into practice if you haven’t, make prototypes and break stuff. Remember, all the knowledge in the world will never be a substitute for real-world experience. After reading 3 books and taking 2 online courses on responsive web design, I started to build new themes from scratch for fun. I skipped the talk-to-experts phase for this one, since I already speak with designers on a daily basis, although I did share my designs with them to receive feedback.

5. Iterate Until You Are Satisfied

You probably won’t get the level of results that you want on the first try. That is not only normal and okay, it’s great. Mistakes while you are learning are a good thing; they are iterations and variations of the knowledge you are acquiring. Think of them as small experiments that allow you to explore variations and let you develop your skills.
Keep practicing until you get the results you want, and when you get there, you will find that there is an almost infinite number of new lines of exploration you can keep learning for any given topic. Choose any one of those you like. Or, if you are like me, you will reach a level where you are better than average but, for any number of valid reasons, don’t really want to become best in class. For example, if you are a manager, you don’t need to become the best developer, but knowing the skills will allow you to be a better manager, communicate better, and grow your team. What you learn and the amount of specialization you develop is up to you.

Build Your Own Tools

I created my news aggregators and parsers after learning responsive web design, although I could have mixed and matched several existing tools to achieve similar results. It doesn’t matter if you build them from scratch or piggyback on existing solutions; always evaluate the possibility of using existing tools to make your learning process more efficient.

Outsource Tedious Activities

Sometimes I need to learn stuff that requires repetitive and tedious tasks. In that case I usually rely on virtual assistants, who can take care of them so I can focus my time on more valuable things, or even on doing nothing and enjoying my time. For example, you can ask a virtual assistant to do the research on which books and courses are the best and buy them for you, or have them identify the experts in any given field.

Remember, learn-hacking is all about being able to acquire knowledge and develop skills in the most efficient way possible. Get creative!
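The single-view aggregator described earlier can be sketched in a few lines. This is a minimal, hypothetical illustration using only Python's standard library; the actual tools mentioned in the post were separate web apps, and the function name here is made up:

```python
import xml.etree.ElementTree as ET

def aggregate(feed_documents):
    """Merge the <item> entries of several RSS documents into one flat
    reading list, so everything can be scanned in a single view."""
    items = []
    for xml_text in feed_documents:
        root = ET.fromstring(xml_text)
        for item in root.iter("item"):
            items.append({
                "title": item.findtext("title", default=""),
                "link": item.findtext("link", default=""),
            })
    return items
```

Feeding it the raw XML of each blog's RSS feed yields one combined list to skim, which is the whole point of the single-page view.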
Invoke-Parallel and PowerCLI Have you had any luck running this with PowerCLI? I'm hitting a brick wall doing so and was hoping for some suggestions on what to try. In my script block I am adding the PowerCLI snapin so that its functions can be used, but I keep getting the following: Get-RunspaceData : An item with the same key has already been added. At C:\capacity-planning\windows\Invoke-Parallel\Invoke-Parallel\Invoke-Parallel.ps1:568 char:21 + Get-RunspaceData + ~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Write-Error], ArgumentException + FullyQualifiedErrorId : System.ArgumentException,Get-RunspaceData If I try running with the ImportModules parameter I get this: Get-RunspaceData : You have modified the global:DefaultVIServer and global:DefaultVIServers system variables. This is not allowed. Please reset them to $null and reconnect to the vSphere server. At C:\capacity-planning\windows\Invoke-Parallel\Invoke-Parallel\Invoke-Parallel.ps1:568 char:21 + Get-RunspaceData + ~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Write-Error], InvalidState + FullyQualifiedErrorId : VMware.VimAutomation.ViCore.Types.V1.ErrorHandling.InvalidState,Get-RunspaceData It is all very confusing to me. This post seems to make it sound easy if I were not to use this module, but I'd prefer to stick with this plumbing if at all possible: http://velemental.com/2012/03/11/multithreading-powercli/ Haven't tried this, but I do have some internal use cases for it; will take a look tomorrow! Just checking to see if you ever got around to testing this out. I am having the same errors as awiddersheim mentions. Hi @jrob24! No luck so far. Tagged this 'help wanted' as it's a bit low priority on my side, but will keep poking around when the need comes up. Most of my operations with PowerCLI prioritize functionality over speed - partially due to the lumbering nature of PowerCLI : ) If you have any ideas or get anything working, definitely let us know!
I finally found a workaround to this problem. Runspaces are not thread-safe, so you have to run them out of process. Here's my solution in C#; you should be able to reference this to make modifications to your PowerShell script to create an out-of-process runspace. See line: RunspaceFactory.CreateOutOfProcessRunspace() public async Task TestPowercli(string name, string vcenterHost) { if (string.IsNullOrWhiteSpace(name)) { return; } if (string.IsNullOrWhiteSpace(vcenterHost)) { return; } instanceName = name.Trim(); int timeoutMins = 5; string script = @"c:\temp\testpowercli\testpowercli.ps1 " + instanceName + " " + vcenterHost + " -verbose"; // note: the original snippet omitted this declaration of the output buffer PSDataCollection<PSObject> outputCollection = new PSDataCollection<PSObject>(); PowerShellProcessInstance instance = new PowerShellProcessInstance(new Version(5, 0), null, null, false); using (Runspace runspace = RunspaceFactory.CreateOutOfProcessRunspace(new TypeTable(new string[0]), instance)) { PowerShell ps = PowerShell.Create(); runspace.Open(); ps.Runspace = runspace; ps.AddScript(script); outputCollection.DataAdded += outputCollection_DataAddedPowercli; // the streams (Error, Debug, Progress, etc) are available on the PowerShell instance. // we can review them during or after execution. // we can also be notified when a new item is written to the stream (like this): ps.Streams.Error.DataAdded += Error_DataAddedPowercli; ps.Streams.Verbose.DataAdded += Verbose_DataAddedPowercli; IAsyncResult result = ps.BeginInvoke<PSObject, PSObject>(null, outputCollection); DateTime start = DateTime.Now; while (result.IsCompleted == false) { if ((DateTime.Now - start).TotalMinutes > timeoutMins) { Clients.All.getPowercliMessage(instanceName, "ERROR: Time out exceeded after " + timeoutMins + " minutes"); break; } await Task.Delay(1000); } } Clients.All.getPowercliMessage(instanceName, "Done"); } I'm also running into this problem. Any further development without using C#?
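For anyone landing here from outside the PowerShell world, the workaround's core idea, isolating non-thread-safe global state by giving each worker its own process, is language-agnostic. A rough Python sketch of the same pattern; all names here are stand-ins for illustration, not PowerCLI calls:

```python
from multiprocessing import Pool

# Module-level "session" state: each worker process gets its own copy,
# so concurrent workers can never clobber each other's connection
# (the analogue of PowerCLI's global:DefaultVIServer collision above).
_session = None

def query(host):
    """Connect (once per process) and run a query against `host`.
    The connect step is a stand-in for something like Connect-VIServer."""
    global _session
    if _session is None:
        _session = "connection-to-" + host
    return "result-from-" + _session

def query_all(hosts):
    # One OS process per worker: no shared runspace, no shared globals.
    with Pool(processes=2) as pool:
        return pool.map(query, hosts)
```

The cost is that each process pays its own connection setup, which matches the trade-off of CreateOutOfProcessRunspace in the C# workaround.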
Having two IPs for the same hostname -- ping is sticking with the old IP and not trying the new IP I thought there is a 1:n relationship between IPs and hostnames. However what I get is this [me@neo ~]$ nslookup dozer Server: <IP_ADDRESS> Address: <IP_ADDRESS>#53 Name: dozer.fritz.box Address: <IP_ADDRESS> Name: dozer.fritz.box Address: <IP_ADDRESS> And here with getent [me@neo ~]$ getent hosts dozer <IP_ADDRESS> dozer.fritz.box <IP_ADDRESS> dozer.fritz.box That seems to be OK; however, ping is not continuing with the correct IP [me@neo]$ ping dozer PING dozer (<IP_ADDRESS>) 56(84) bytes of data. From neo.fritz.box (<IP_ADDRESS>) icmp_seq=1 Destination Host Unreachable From neo.fritz.box (<IP_ADDRESS>) icmp_seq=2 Destination Host Unreachable From neo.fritz.box (<IP_ADDRESS>) icmp_seq=3 Destination Host Unreachable From neo.fritz.box (<IP_ADDRESS>) icmp_seq=4 Destination Host Unreachable From neo.fritz.box (<IP_ADDRESS>) icmp_seq=5 Destination Host Unreachable From neo.fritz.box (<IP_ADDRESS>) icmp_seq=6 Destination Host Unreachable ^C --- dozer ping statistics --- 7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6220ms pipe 4 UPDATE 1 I had a laptop (dozer) connected, first through Ethernet (.81), then through WiFi (.32). Finally I disconnected Ethernet (.81). For some reason my Fritzbox (DNS) keeps the Ethernet entry (.81). I can reboot my Fritzbox (DNS) and hope it's gone. However, I am wondering whether this is an OK state, and thus a client issue, or a server (Fritzbox) mistake? UPDATE 2 If this is a client issue - how do I convince ping to take the second DNS entry? ping on the hostname is blocking on the first, non-existing IP. If this is a site on your local network, then you may want to change some settings here. Are you in control of the network yourself? @TRiG: just updated. please reconsider the "-1" Common practice; it can be seen as "poor man's load balancing". See Round-robin DNS yes, ok. but then ping should take just any working lookup.
Instead, ping is sticking with the non-working lookup and does not continue to the next (at least in my case). I am wondering if there is some magic that needs to be enabled on my system. Why assume it's my downvote? It isn't. @TRiG sorry then :) Not really sure, but a few suggestions: untick 'Register this connection in DNS' in the Ethernet connection settings. Try an ipconfig /flushdns? Set a shorter default TTL? Set scavenging to be more aggressive? (if you can) A site can be hosted on multiple servers. If DNS returns multiple A or AAAA records, a client is free to connect to any one of them. How this is managed is up to the site; ideally, all will return the same content. This may be used to have multiple servers around the globe, so that clients connect to the one closest. It can also be done with anycast IP addresses, where there is just one IP address, but it routes to multiple different physical computers depending on where you are in the world; that is trickier to set up, though. I am working in a local network. No balancing needed. If you're in control of the DNS, can you simply remove invalid entries? Sure, there is a manual way, like rebooting or deleting. Still I have the feeling that something is wrong with the system or DNS. @tswaehen there is nothing really wrong, you just have a bad entry. If it was made by your DHCP server it will probably clear itself eventually, but if you're impatient you can probably clear it manually by rebooting the router. Looking at the answers, it sounds like bad luck :) However, wouldn't it be cool if ping or any other program just took the next valid lookup and gave me the correct answer?
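As the answers note, ping itself won't fail over, but the desired behavior can be approximated in user space: resolve every address for the name yourself and probe each one until something answers. A hedged sketch, assuming GNU/Linux ping flags; this is an illustration, not a standard tool:

```python
import socket
import subprocess

def ping_any(hostname):
    """Return the first address for `hostname` that answers a single
    ping probe, or None if none of them do."""
    # Collect every A/AAAA result, not just the first one ping would use.
    addrs = sorted({info[4][0] for info in socket.getaddrinfo(hostname, None)})
    for addr in addrs:
        try:
            # -c 1: one probe; -W 2: two-second timeout (GNU/Linux ping)
            probe = subprocess.run(["ping", "-c", "1", "-W", "2", addr],
                                   capture_output=True)
        except OSError:
            continue  # ping binary not available
        if probe.returncode == 0:
            return addr
    return None
```

With a stale DNS entry like the one in the question, this would skip the dead .81 address and report the working .32 one, which is what the asker wished ping did on its own.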
Will a remastered Debian for PowerPC still be in the powerpc architecture? I would like to know because I am going to use a PowerPC Mac G5 for making my distribution. I also need to know if it makes my remaster the powerpc architecture, and how to change it to 32- or 64-bit if possible. Strictly speaking, if you're creating your own distribution based on Debian, the powerpc architecture can be whatever you want it to be; in Debian it's "only" a name in dpkg's tables. If you do change your definition of an architecture though, you'd be better off changing the architecture name; for a good example of what happens when you don't, see Raspbian (where armhf isn't the same as Debian's armhf). powerpc in Debian is 32-bit; there's a 64-bit port, ppc64 (and a 64-bit little-endian port, ppc64el, but that's not relevant for a G5). If you want to mix both, you'd typically use multiarch. So the answer to your question depends on what changes you intend to make in your "remastering"... So how do I use multiarch? Is that the best method? You'll need to explain what you're trying to do in more detail if you want to know what the best method is... I want to make a distro based on Debian on a PowerPC and intend to redistribute it with Linux Live Kit, but I also want it to be compatible with other computers. You want a single live system which can work on multiple architectures? I actually think that would work better than having one or more architectures, so yes. You realise that's not possible in the general case? You can use i386 on a 32- or 64-bit x86 PC, powerpc on a 32- or 64-bit PowerPC, but a powerpc live system won't work on an x86 PC... Even if you wanted to mix architectures using multiarch, it wouldn't work: multiarch supports co-installable libraries, not co-installable binaries. So I can't use a PowerPC to make my distro? Will using i386 for the architecture work instead? I know that is 32-bit. Will installing Debian 32-bit work?
I don't know what to do then. So if I can't use multiarch, what do I do? I can't, all I have is my phone at the moment and the site won't let me log in for some reason... I will try looking into this further
"On Startup -> Open the New Tab page" not working; it is doing the "Continue where you left off" behavior On two separate PCs now (current Chrome version I'm using is v39.0.2171.95 m) the "Open the New Tab page" functionality is not working. If I close Chrome with, for example, 3 tabs open, when I reopen Chrome all three of those tabs are still open. I've seen this before and am unsure of the fix. Any ideas? Go to Settings > On Startup and ensure that "Continue where you left off" is NOT selected. You then have to go into your taskbar, right-click on "Google Chrome" and exit. Only then will Chrome pick up the new settings. It should work when you reopen Chrome. It just happened to me a few days ago, so I know what you're talking about. This is an issue with Chrome auto-updating itself and rewriting the settings internally, despite whatever your settings page shows now. Toggle it to "Continue where you left off" and back to "Open the New Tab page", then save and exit from the taskbar. In my case I toggled the settings and went into Task Manager and ended the Chrome process to get it to work. I had the same issue on all the computers I work on, and @test's answer was what did it. I had to toggle between "Continue where you left off" and "Open the New Tab page" in Settings, and then kill the Chrome processes through Task Manager. Then it worked. Go to Settings -> Advanced -> System and uncheck the setting: "Continue running background apps when Google Chrome is closed" That's an important answer. Toggling between "Continue where you left off" and back to "Open the New Tab page" and then closing Chrome was not working for me. Maybe this way it does. I managed to toggle by killing Chrome with Task Manager. Make sure no other Chrome windows are open. In my case I was using the Todoist extension which includes a "pop out window". This window appears on the taskbar as a Todoist window but it's actually a Chrome window.
Hence when Chrome appears to be closed it's not actually closed, so the tabs don't close and they reappear when Chrome is reopened. For any macOS users that land here, I think toggling the "On startup" setting to "Continue where you left off", completely quitting, toggling it back to your desired "Open the New Tab page", and completely quitting again should do the trick. I went off what's been mentioned with Chrome possibly having a disconnect between the shown and internal settings ("Open the New Tab page" was already set when I first looked), and needing to be quit from the Windows task bar to pick them up. Not sure how I would confirm unless it breaks again but I think toggling the option and quitting is key. I've definitely quit Chrome, reset my Mac, and upgraded Chrome all while it's been doing the "Continue where you left off" behavior, and still "Open the New Tab page" was what it showed me when I looked at the setting. Toggling it without quitting had no effect, it still kept doing the "Continue where you left off" behavior.
missing icons from built foam foam was built like this: node --harmony ../../bower_components/foam/tools/foam.js --classpath=../../js/app/model/foamModels.js foam.build.BuildApp targetPath=. controller=com.employtouch.webclock.WebClockApp includeFoamCSS=true precompileTemplates=true CLASS({ package: 'com.employtouch.webclock', name: 'WebClockApp', extends: 'foam.browser.u2.BrowserController', requires: [ 'com.employtouch.webclock.Notification', 'foam.dao.EasyDAO' ], exports: [ 'notificationDao' ], properties: [ { name: 'notificationDao', factory: function() { return this.EasyDAO.create({ model: this.Notification, daoType: 'IDB', name: 'notification', cache: true, cloning: true, contextualize: true, seqNo: false }); } }, { name: 'model', factory: function() { return this.Notification; } } ], }); only modification was on the foam.js file itself as I didn't know how to 'require' other paths: c:\dev\git\TouchBaseHost\src\main\webapp\webclock-resources\bower_components\foam>git diff tools/foam.js diff --git a/tools/foam.js b/tools/foam.js index b7b54c3..1d34b32 100644 --- a/tools/foam.js +++ b/tools/foam.js @@ -73,6 +73,7 @@ console.dir(FOAMargs); foamPath = args.foamPath + '/core/bootFOAMnode.js'; } require(foamPath); + require('../../../js/app/model/foamModels.js'); if (CLASSPATH) { for (var i = 0; i < CLASSPATH.length; i++) { c:\dev\git\TouchBaseHost\src\main\webapp\webclock-resources\bower_components\foam>git branch * master I tried to figure out where those images are coming from, but the dom is pretty complicated, not sure if it is the canvas or an img with an empty 'src' <action-button id="u2v24" class="foam-u2-md-ActionButton- foam-u2-md-ActionButton-available foam-u2-md-ActionButton-floating-action-button noselect" style="color: rgb(255, 255, 255); opacity: 1;"> <canvas id="view14" class="halo" style="width: 100%; height: 100%" width="44" height="44"></canvas> <div id="u2v25" class="foam-u2-md-ActionButton-icon-container"><div id="u2v26" 
class="foam-u2-md-ActionButton-icon"> <icon id="view16"> <img class="material-icons-extended" id="view17" src="" style="opacity: 1; height: 24px; width: 24px;"> </icon> </div></div></action-button> Code loading First, as to loading extra models, you have two options, effectively: Keep them all in one file and add extraFiles=../../js/app/model/foamModels.js to your command (after the foam.build.BuildApp). Or split the models out, one to a file, in a ../../js/com/foo/bar/SomeModel.js sort of hierarchy, Java-style, and then add extraClassPaths=../../js to your BuildApp command. Then you should be able to revert the hack to the foam.js load script. That should also get your code included properly in the built output, in case it wasn't already. Icons As to rendering the icons correctly, you should double-check that you've actually got a <link> tag for the fonts.css file included in your HTML page, and that that file is loading correctly. Thanks! I got images now :) I'll change my hack back... I did not know how classpath, extraClassPaths or extraFiles works...
Add benchmarks Hi! I love this library and use it on many projects, so I decided to help you make it better. I have experience in size and performance optimizations (see my password generation library's benchmarks). I think these things are super important in the modern frontend world, and I believe that every library's goal is to provide the fastest and smallest (in terms of bundle size) solution. So if you don't mind, I would like to help you with that. To be able to optimize something, we need tools to measure the size of the library and the speed of each method. So I started by providing these development tools for this repository. I installed size-limit. This library provides the easiest way to calculate the gzipped library size including all dependencies. Also, you can install a GitHub action to automatically check how every PR changes the library's size. To check the library size, execute yarn size in the project root. % yarn size Size: 1.69 KB with all dependencies, minified and gzipped I installed benchmark.js and wrote test cases for all library methods. This allows us to find places that need optimization and check how PRs/changes affect performance. To run the benchmark, execute yarn benchmark in the project root.
% yarn benchmark
parse_____________________________________83,539 ops/s
parse (parseNumbers=true)_________________82,329 ops/s
parse (parseBooleans=true)________________79,727 ops/s
parse (sort=false)________________________97,193 ops/s
parse (decode=false)_____________________217,655 ops/s
parse (arrayFormat=bracket)_______________67,130 ops/s
parse (arrayFormat=index)_________________65,591 ops/s
parse (arrayFormat=comma)_________________96,522 ops/s
stringify________________________________152,056 ops/s
stringify (strict=false)_________________219,271 ops/s
stringify (encode=false)_________________373,492 ops/s
stringify (skipNull=true)________________155,919 ops/s
stringify (skipEmptyString=true)_________160,193 ops/s
stringify (arrayFormat=bracket)__________149,923 ops/s
stringify (arrayFormat=index)____________134,268 ops/s
stringify (arrayFormat=comma)____________155,819 ops/s
extract_______________________________28,180,181 ops/s
parseUrl__________________________________83,579 ops/s
stringifyUrl_____________________________115,330 ops/s
Thanks for the PR. I'm happy to take the benchmark, but I'm not interested in using size-limit. Also make sure $ npm test passes locally for you. This test 👇 is crashing since I forked the repo. I think it's broken in the master branch too. parse › value separated by encoded comma will not be parsed as array with `arrayFormat` option set to `comma` May I ask why you don't like the idea of having size-limit installed? It's a very popular library made by Andrey Sitnik (the PostCSS and Autoprefixer author). You don't have to add it to the test flow, but I guess it's good to be able to check the library size and have an easy way to compare the library size before/after some changes, especially if you decide to add/change some dependency. P.S. Also I think it would be helpful for other developers if you added the library's size to the project description. Here is an example.
I had urijs installed in my project and it cost us 30 KB (gzipped), so I decided to check the size of query-string and had to go to bundlephobia.com to find out the size of this library. But that site shows the wrong value (the size of the whole NPM package, not of the imported code). May I ask why you don't like the idea of having size-limit installed? I try very hard not to introduce tooling when it's not important. I've added a simple badge to the readme instead. Thanks :)
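For readers wondering what the ops/s numbers in this thread actually measure: benchmark.js times many iterations of each call and reports a rate. The same mechanics can be sketched with Python's standard-library timeit (benchmark.js itself is JavaScript; this is only an illustration of the measurement idea, and the helper name is made up):

```python
import timeit
from urllib.parse import parse_qs

def ops_per_second(func, *args, repeat=3, number=10_000):
    """Run `func(*args)` `number` times per round, keep the fastest of
    `repeat` rounds, and convert that timing into operations per second."""
    best = min(timeit.repeat(lambda: func(*args), repeat=repeat, number=number))
    return number / best

# e.g. rate of a toy query-string parse, analogous to `parse` above
rate = ops_per_second(parse_qs, "foo=1&bar=2&bar=3")
```

Taking the fastest round rather than the average is the usual micro-benchmarking convention, since slower rounds mostly measure interference from the rest of the machine.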
## Introduction: Challenges to GitOps Cloud Native Testing

One of the major trends in contemporary cloud native application development is the adoption of GitOps: managing the state of your Kubernetes cluster(s) in Git, with all the bells and whistles provided by modern Git platforms like GitHub and GitLab in regard to workflows, auditing, security, tooling, etc. Tools like ArgoCD or Flux are used to do the heavy lifting of keeping your Kubernetes cluster in sync with your Git repository; as soon as a difference is detected between Git and your cluster, the change is deployed to ensure that your repository is the source of truth for your runtime environment.

Don't you agree that it's time to move testing and related activities into this paradigm too? Exactly! We at Kubeshop are working hard to provide you with the first GitOps-friendly, cloud-native test orchestration/execution framework - Testkube - to ensure that your QA efforts align with this new and shiny approach to application and cluster configuration management. Combined with the GitOps approach described above, Testkube will include your test artifacts and application configuration in the state of your cluster and make Git the source of truth for these test artifacts. And it's open source too. For more on Testkube, check out the introduction blog, ["Hello Testkube"](https://kubeshop.io/blog/hello-testkube-power-to-testers-on-k8s).

Benefits of the GitOps approach:

1. Since your tests are included in the state of your cluster, you are always able to validate that your application components/services work as required.
2. Since tests are executed from inside your cluster, there is no need to expose services under test externally purely for the purpose of being able to test them.
3. Tests in your cluster are always in sync with the external tooling used for authoring.
4.
Test execution is not strictly tied to CI but can also be triggered manually for ad-hoc validations, or via internal triggers (Kubernetes events).
5. You can leverage all your existing test automation assets from Postman, or Cypress (even for end-to-end testing), or … through executor plugins.

Conceptually, this can be illustrated as follows:

## GitOps Tutorial

Let's see this in action - here comes a step-by-step walkthrough that sets up automated application deployment and execution of Postman collections for testing in a local Kind cluster. Let's start by setting things up for our GitOps-powered testing machine!

### Installations for GitOps Testing

#### 1. [Fork the example repository](https://github.com/kubeshop/testkube-flux/fork) and clone it locally

git clone https://github.com/$GITHUB_USER/testkube-flux.git

#### 2. Start a Kubernetes cluster

You can use Minikube, Kind or any managed cluster with a cloud provider (EKS, GKE, etc.). In this example we're using [Kind](https://kind.sigs.k8s.io/).

kind create cluster

#### 3. Create a GitHub Classic Token

It must be of type [__Classic__](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#creating-a-personal-access-token-classic) (i.e. it starts with `ghp_`). Then export the environment variables in your terminal.

#### 4. Install Flux in the cluster and connect it to the repository

Install the [Flux CLI](https://fluxcd.io/flux/installation/) and run:

flux bootstrap github \

#### 5.
Create a Flux Source and a Kustomize Controller

The following command will create a Flux source to tell Flux to apply changes that are made in your repository:

flux create source git testkube-tests \ --export > ./cluster/flux-system/sources/testkube-tests/test-source.yaml

And now create a Flux Kustomize controller to apply the Testkube Test CRDs in the cluster using [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/):

flux create kustomization testkube-test \ --export > ./cluster/flux-system/sources/testkube-tests/testkube-kustomization.yaml

#### 6. Install Testkube in the cluster

Install the Testkube CLI from https://kubeshop.github.io/testkube/installing and run the following command to install Testkube and its components in the cluster:

#### 7. Create a `Test CRD` with the `testkube` CLI

In this example the test being used is a Postman test, which you can find in `/img/server/tests/postman-collection.json`. To create a Kubernetes CRD for the test, run:

testkube generate tests-crds img/server/tests/postman-collection.json > cluster/testkube/server-postman-test.yaml

Note: You can [run Testkube from your CI/CD pipeline](https://kubeshop.github.io/testkube/integrations/testkube-automation) in case you want to automate the creation of the Test CRDs.

#### 8. Add the generated test to the Kustomize file

The name of the test file created in the previous step is `server-postman-test.yaml`; add it to the Kustomize file located in [`cluster/testkube/kustomization.yaml`](./cluster/testkube/kustomization.yaml):

+ - server-postman-test.yaml

#### 9. Push all the changes to your repository

git pull origin main
git add -A && git commit -m "Configure Testkube tests"

#### 10.
Your tests should be applied in the cluster

To see if Flux detected your changes, run:

flux get kustomizations --watch

And to ensure that the test has been created, run:

testkube get test

NAME | TYPE | CREATED | LABELS | postman-collection-test | postman/collection | 2023-01-30 18:04:13 +0000 UTC | kustomize.toolkit.fluxcd.io/name=testkube-test, | | | | kustomize.toolkit.fluxcd.io/namespace=flux-system |

#### 11. Run your tests

Now that you have deployed your tests to the cluster in a GitOps fashion, you can use Testkube to run them in multiple ways:

- Using the Testkube CLI
- Using the Testkube Dashboard
- Running the Testkube CLI from a CI/CD pipeline

We'll use the Testkube CLI for brevity. Run the following command to run the recently created test:

testkube run test postman-collection-test

And see the test result with:

testkube get execution postman-collection-test-1

Test execution completed with success in 13.345s

## GitOps Takeaways

Once fully realized, using GitOps for testing Kubernetes applications as described above provides a powerful alternative to a more traditional approach, where orchestration is tied to your current CI/CD tooling and not closely aligned with the lifecycle of Kubernetes applications. This tutorial uses Postman collections for testing an API, but you can bring a whole suite of tests with you to Testkube. [Check the documentation for the available test types](https://kubeshop.github.io/testkube/category/test-types).

Would love to get your thoughts on the above approach - over-engineering done right? Waste of time? Let us know! Why not give it a go yourself? [Sign up to Testkube](http://cloud.testkube.io/) and try one of our examples or head over to our [documentation](http://docs.testkube.io/) - if you get stuck or have questions, we're here to help!
Find an answer to your questions in the [Testkube Knowledge Base](https://testkube.io/knowledge-base) or [reach out to us on Discord](https://discord.com/invite/6zupCZFQbe). We’re eager to hear how you use our integrations!
– SkyDrive app for Windows Phone gets update, reduces free storage capacity | Engadget

We are pleased to announce that the SkyDrive Pro sync client is now available for Windows and can be downloaded here. This standalone client allows users of SharePoint and SharePoint Online in Office to sync their personal SkyDrive Pro and SharePoint or Office team site document libraries to their local machine, for access to important content on and offline. The SkyDrive Pro client can be downloaded standalone and does not require any version of Office to be installed. It can also be installed side-by-side with previous versions of Office. Please note that if you have one of the following versions of Office installed, then you already have the SkyDrive Pro sync client and do not need to install it separately. Happy syncing with SkyDrive Pro!

For some time now, leaders have made digital transformation a priority. But when the pandemic hit this spring, adopting and embracing digital technology went from being a matter of…. And I know our customers feel it too. After quickly moving to remote and hybrid work models this spring, organizations are now seeking sustainable ways to help people be productive and prioritize their wellbeing….

Official SkyDrive App For iPhone and Windows Phone Now Available For Download | Redmond Pie

Download SkyDrive for Mac. Download SkyDrive for Windows Phone. Download SkyDrive for iPhone or iPad.
Meanwhile, any documents and spreadsheets that you create on your phone can be saved to your SkyDrive storage using the Save as… option. Once saved to the cloud, SkyDrive will continue to be the default location for opening the file in question unless no Internet connection is available. To browse the files on SkyDrive, all you need to do is open the app and select the appropriate folder. The Files list shows the full contents of your SkyDrive, from videos to music and documents and images. Everything is sorted alphabetically (there is no changing this) and a single tap will launch the chosen file with the correct application. For instance, Word documents will open in Word Mobile; MP3s will open with the built-in media player, etc. There are additional settings available for managing SkyDrive, but these can only be accessed through the browser. The app also handles files in formats such as .PPTX and .ODS, quite a good option for Microsoft to provide! It is the perfect accompaniment, like gravy on chicken or Parmesan and black pepper on spaghetti Bolognese, and should therefore be the very first app you install on a Windows Phone.
OPCFW_CODE
Configuring email routing Initial configuration of the email routing application depends on the needs of the customer. In many cases, the defaults supplied in the solution definition will be sufficient. You can adjust email routing parameters in either of two ways: - To set new default values to be used in all new parameter groups that use the parameter, open Genesys Administrator Extension, navigate to Operations > Parameters, and adjust values as desired. Do not change Key Names. - To set a new value that applies only to a specific parameter group, open Genesys Administrator Extension, navigate to Operations > Parameter groups, and adjust values as desired. <multistep> Configure screening rules= You can use the eServices Knowledge Manager application (on the Aux VM) to customize the screening rules used to route email to any of the five Category parameter groups. Content screening explains how screening works in Business Edition Premise. "Screening Rules" in the eServices 8.1 User's Guide explains how to use the Screening Rule Editor and details how the rules work. |-| Configure email acknowledgements= You can also use the eServices Knowledge Manager application to customize the text of the acknowledgement emails: Email acknowledgement body open hours, Email acknowledgement body closed hours, and Email acknowledgement body special days, as well as the opening (salutation), closing, and time zone. "Using Categories and Standard Responses" in the eServices 8.1 User's Guide explains how to edit the acknowledgement content. |-| Configure distribution= After you customize the screening rules, you can adjust their associated parameter groups to distribute emails to the correct targets. If you added the term "sales" to the Category 3 screening rule, for example, you can route emails with "sales" in their subject lines to a particular target agent group by setting the Category 3 parameter group Email target value to the Sales agent group. 
You can also enable or disable supervisor review, and change the percentage of emails subject to review. |-| Set open hours and special days= The parameter Email open hours sets the standard hours that your office is open during the week. Use the Email special day parameter to set: - The dates or days of the week on which your office is closed for the entire day. - The dates or days of the week on which your office is closed for only part of the day. In these cases, you use the Time Ranges field in GAX to set the hours that you will be open on that date. Specific dates set in Email open hours are treated as special days. Hours set for the same date or day of the week in Email special day override those set in Email open hours; for example, if Email open hours specifies that you are open from 9AM to noon on December 31, and Email special day sets the hours of 11AM-2PM for the same date, people who send an email at 10AM on that date receive the special day acknowledgement. Similarly, in both parameters, date patterns higher in the list take precedence over those lower in the list. |-| Tune priorities= You can adjust the priority of emails using the two priority tuning parameters: - Email priority sets the initial email priority; you typically have little reason to change the default of 100. - Email overflow priority sets the priority of emails that exceed the Email target timeout. The default value is also 100, which means that emails that have already passed the overflow timeout will be re-queued based solely on their age, ensuring that the oldest emails will appear in the queue first. |-| Send a test email= To verify that Business Edition Premise is correctly receiving and routing emails, send a test email: - Install an email client and ensure that it connects to your email server. - Send an email to the address specified during the eServices SPD deployment. 
- Ensure that you have installed an IWS client (you did this when you were making a test call during the voice routing configuration). - If Business Edition Premise is successfully installed, an IWS interaction window containing the email appears when the email arrives.
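To make the precedence rules for open hours and special days concrete, here is a small Python sketch of the resolution logic described above. The rule representation (lists of pattern/time-range pairs) is an illustrative assumption for this sketch, not Genesys' actual parameter format:

```python
from datetime import datetime, time

# Illustrative rule format (an assumption, not Genesys' own): each rule maps
# a date string ("2023-12-31") or weekday name ("Sunday") to the open time
# ranges for that day; an empty list of ranges means closed all day.
def resolve_hours(sent_at, open_hours, special_days):
    """Return the open time ranges in effect when an email arrives.

    Special-day rules override open-hours rules for the same date, and
    within each list, patterns higher in the list take precedence.
    """
    keys = (sent_at.strftime("%Y-%m-%d"), sent_at.strftime("%A"))
    for rules in (special_days, open_hours):   # special days checked first
        for pattern, ranges in rules:          # earlier patterns win
            if pattern in keys:
                return ranges
    return []                                  # no matching rule: closed

def is_open(sent_at, open_hours, special_days):
    t = sent_at.time()
    ranges = resolve_hours(sent_at, open_hours, special_days)
    return any(start <= t < end for start, end in ranges)

# The example from the text: open 9AM-noon on December 31, but a special
# day sets 11AM-2PM for the same date.
open_hours = [("2023-12-31", [(time(9), time(12))])]
special_days = [("2023-12-31", [(time(11), time(14))])]
print(is_open(datetime(2023, 12, 31, 10), open_hours, special_days))  # False: special day wins
print(is_open(datetime(2023, 12, 31, 13), open_hours, special_days))  # True
```

As in the documented example, an email sent at 10AM falls outside the special-day hours even though the regular open hours would cover it, so the sender receives the special-day acknowledgement.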
OPCFW_CODE
A page all about our lovely Kinect and the things it can do.

Kinect for Windows XP, Vista or 7

After countless attempts at getting the Kinect to work with Windows, and all the drivers, patches and programs you have to install, I came across a bundle installer that does it all for you, gives you some demos too, and adds the ability to control the mouse, mouse buttons and key presses using gestures. This is the website where you can download the package and the readme. You might then need to re-load the drivers once you plug the Kinect in; this is done simply by pointing the "found new hardware" installation to "C:/program files/prime sense/sensor/driver" for the Kinect motor. Ignore the Kinect microphone, but repeat the same steps as for the motor when the Kinect sensor shows up as new hardware. Hope this helps. Chris Paton Chris-robot

Libfreenect / OSX / libusb

https://github.com/OpenKinect/libfreenect has the skinny on the actual driver. Basically, you do need a patched, older version of libusb. This was true of Cinder Kinect Block and ofxKinect as well. The homebrew install doesn't work, I think, but I can't be sure of that. You will need to pull the matching version of libusb for this patch. This is NOT v1.0.8; this is a change based off the repo head as of 2010-10-16. To get a tar.gz with a snapshot of the repo at this point, hit the link below. Once you've gotten that tarball and unzipped it somewhere, patch using the files in platform/osx/. Just go to the root directory of the libusb source and run

patch -p1 < [path_to_OpenKinectRepo]/platform/osx/libusb-osx-kinect.diff

You need to tell configure to include some necessary frameworks:

./configure LDFLAGS="-framework IOKit -framework CoreFoundation"

Recompile libusb and put it wherever CMake will look (/usr/local/lib, /usr/lib, etc.). If you're using a package manager like fink, macports, or homebrew, I'm going to expect you know what you're doing and can deal with this. If not, see the IRC channel.
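Collected into one place, the libusb patch-and-build steps above amount to something like the following. The source paths here are placeholders for wherever you unpacked the libusb snapshot and cloned the OpenKinect repository:

```shell
# Paths are placeholders: adjust to where you unpacked the libusb
# snapshot tarball and cloned the OpenKinect/libfreenect repository.
cd ~/src/libusb
patch -p1 < ~/src/libfreenect/platform/osx/libusb-osx-kinect.diff
./configure LDFLAGS="-framework IOKit -framework CoreFoundation"
make
sudo make install   # installs where CMake will look (/usr/local/lib)
```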
OpenNI is a new-fangled system for the creation of fancy user interfaces. It accepts modules and many other things (middleware called NITE, apparently). SensorKinect is one such project; it was originally made by PrimeSense for Windows and Linux and has since been ported to OSX. Obviously, the openFrameworks community got right on it and there is some pretty sneaky stuff here :)

Steps to Reproduce on OSX

To begin with, there are a few things that one should install. Basic OpenNI and NITE need to be installed first, and these are already in beta on the page for OSX. When installing the NITE binaries, you need to use 0KOIk2JeIBYClPWVnMoRKn5cdY4= as the licence key (which is needed for some reason! :S). There is source code for OpenNI, and it appears for NITE as well, but I've not managed to get that properly built yet. I never tried the actual samples contained in OpenNI that come prebuilt! This may have saved a lot of time. They don't work now, but maybe they can be made to work. Once these are installed, we can then attack the 3 projects above in the order they are given in the instructions.

# get repositories
mkdir openni_dev
cd openni_dev
mkdir nite
git clone git@github.com:roxlu/OpenNI.git
git clone git@github.com:roxlu/SensorKinect.git

# compile openNI
cd OpenNI/Platform/Mac/Bin/openFrameworks
./build.sh
make
sudo make install

# compile SensorKinect
cd SensorKinect/Platform/Mac/Bin/openFrameworks
./build.sh
make
sudo make install

# do *something* with nite
cp OpenNI/Platform/Mac/Bin/openFrameworks/nite* nite/
cd nite
./nite_copy_to_openframeworks.sh
./nite_change_rpaths

Compiling OpenNI can be an issue with CMake, as it looks for sample/ofxOpenNI which doesn't exist. Comment this out in the CMakeLists.txt file and run the build scripts and make. It builds OK. With these two built, you'll see the OpenNI_openFrameworks dir. This contains most of the bits you need, such as the libraries.
It is recommended that you compile and run two test programs, Sample-NiViewer and Sample-NiUserTracker. The first should show the streams from the two sensors working, hopefully, if all is well. The second actually loads the middleware, I believe, and performs the skeleton tracking. This is the important one to get working, and it didn't work until I installed the proper NITE binaries. The ofxOpenNI plugin is fairly straightforward. Move it to your apps/myapps folder and open the project. You need to add the libOpenNI.dylib that you built in the previous section. Before I installed the Mac NITE binaries, this compiled and ran but didn't work. This is a Linux (and MacPorts) system for robotics. It's been touted by Ladyada and a few others as being rather cool. Probably should look into it. Some interesting resources are: There is probably more. It's installed on my VM. It's quite a large package. Is this similar to a DS display? Probably. Kinect and Vuzix Adafruit have this video showing the Vuzix and Kinect working together. Again, it is relying on stereo tracking and similar.
OPCFW_CODE
A tool that allows you to manually load up CheatEngine's signed driver and get a handle to it for various kernel hacking operations. The code is well documented using comments, and a short outline of what's happening is described below; as such, this project is a learning resource. The project has been tested with CheatEngine 7.3.

What is this?

CheatEngine is a well-known tool for game hacking. It features a wide variety of functionality; however, (ab)using that functionality within your own project may not be as easy. There are plenty of scenarios where one would want to use a signed driver to execute code in kernel space, but getting your hands on a certificate may not be so easy. dbk64.sys - CheatEngine's kernel driver - features a ton of functionality such as kernel read/write, process interactions, and more. However, the author of CheatEngine went out of their way to lock down the signed driver so that one cannot easily load it up or get a handle to it. This project allows you to do exactly that: load up CheatEngine's signed driver and grab a handle to it.

How does it work?

To bypass CheatEngine's checks, we try to make ourselves look as legitimate as possible. CheatEngine employs a couple of checks on the integrity of the calling process.
- Check whether the calling process matches a signature generated by the owner [Reference: CheckSignature]
  - We bypass this by starting the original executable, as it's an on-disk check
- Check whether the process has been tampered with [Reference: TestProcess]
  - We bypass this by restoring the bytes from the on-disk file
- Check whether the calling thread comes from within the .text section [Reference: TestProcess]
  - We bypass this by making sure we spawn the threads from the .text section

This task is split into a few steps:
- Start the original CheatEngine process in a suspended state
- We patch our shellcode into CheatEngine's entrypoint
  - This is facilitated by the fact that CheatEngine is loaded without ASLR
- We then resume all threads
- The shellcode will load our DLL
- Now the loader performs a few more tasks:
  - Prepare the registry, namely the A, B, C and D values
  - Start the driver service
  - Copy the original bytes from the .text section into our process
  - Grab a handle to the driver

What can I do with this?

I'll give you two ideas:
- Write code that can interact with the kernel. After all, you don't have to worry about writing your own kernel routines, as CheatEngine covers most of the basics.
- Write a driver manual mapper to load up your own unsigned driver without having to disable Driver Signature Enforcement.

- Make sure CheatEngine 7.3 is installed. You may have to run it at least once (with kernel settings enabled).
- Run cemap.exe as administrator. Make sure loader.dll is in the same directory as cemap.exe.
- If you want to use the handle, have a look at the
OPCFW_CODE
Today I gave a quick (really quick: about 3 minutes) talk on how easy it is to build a mobile widget. To show that, I ported the jQuery Cats example to a mobile widget. In the talk above I go over all the steps needed to run web code on a phone. Not shown in the slides, but it was in the talk, is that this actually runs on phones :). On a side note: it's interesting how the boundaries between applications, websites and widgets are all disappearing: these widgets are basically downloadable web applications, which can run totally offline (but often will interact with data on the Internet). An advantage of the Opera platform is that they target the (still developing) W3C standard, which (hopefully) leads to more interoperability between widget platforms. The SDKs they offer differ more: Nokia offers a HUGE SDK (seriously, a 622 MB install to run some HTML code?!). Also, their SDK will only install (and run) on Windows, which is a pity as a lot of webdevs use Macs these days (I know it's possible to run Windows apps on a Mac, but it is less convenient). Another disadvantage is the need to sign up on the Forum Nokia website to download the SDK: the signup isn't too much of a hassle, but for me it took quite a while for the confirmation mail to arrive. This SDK does allow you to run widgets, but doesn't assist in the creation of the files (not that it's too hard to do manually, but still). In their defense: they also offer a plugin for Aptana Studio, and this will become the preferred method of development, but I wanted to use my own editors to start with. I have also taken a look at Aptana Studio, and that looks cleaner (but it stopped working on my computer after a few hours). Vodafone Betavine/Opera offers a small SDK, which consists of 3 parts. The first is a Betavine-branded "widget packager". This is a simple tool that assists in the making of the XML file and the creation of the zip file. It includes some sample widgets.
It also allows you to add some known frameworks to your widget, but unfortunately the popular jQuery isn't one of the included frameworks. While this isn't too difficult to do manually, the wizard is a nice way to get started. Also included is the standard desktop Opera browser, which enables you to (test) run widgets. The third item in the download is a Betavine-branded .sis file that can be installed on Nokia S60 phones to run the widgets. In terms of phone integration the Nokia runtime is the clear winner: the widgets immediately work on the 5800 XpressMusic I tried them on, and to the user they look like other applications. The Vodafone widgets, on the other hand, first required me to install the Vodafone runtime on my phone. Even then the widgets (currently) don't live with the other applications, but in their own, branded, folder. While this isn't too complex, tasks like these are hard to explain to the average user. However, I expect Vodafone to have their runtime pre-installed on phones they sell, and to have more device integration. Another advantage of the Nokia platform is that it has device libraries which enable widgets to access device-specific data. This includes location, calendar entries, contacts, and other data. Having this access makes the widgets a true application platform, and it is a big advantage over standard (mobile) websites that don't have these capabilities. This is something that is missing from the Opera platform. I now have a phone which has 2 widget runtimes: both the Vodafone/Opera and the Nokia one. Both are similar in what they support, and while Vodafone/Opera had a bit smoother process (for me), the Nokia SDK also works and currently offers device integration that the Opera platform lacks. Over the next few months we'll see a battle between these two to become "the one true" widget platform.
My guess is that both will grow towards each other: Opera will add device capabilities (starting with location), and since the Opera one is backed by the W3C, it will become the standard. Nokia will simply start to (also) support this standard, which shouldn't be too hard: mostly adapting to a different type of config file and extension. However, I do hope that the standard will offer the device capabilities that Nokia currently has.
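For context on the packaging discussed above: a widget of this kind is essentially a zip of web assets plus a configuration file. A minimal W3C-style config.xml might look like the sketch below; the id, names and file names are illustrative, not taken from the jQuery Cats port:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal W3C-style widget configuration; all values are illustrative -->
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.com/widgets/cats"
        version="1.0">
  <name>jQuery Cats</name>
  <description>The jQuery Cats example, packaged as a mobile widget.</description>
  <content src="index.html"/>
  <icon src="icon.png"/>
</widget>
```

Zipped together with index.html and its assets, this is roughly what the Betavine widget packager produces for you.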
OPCFW_CODE
Introduction to Map Design - Part 4

While they're not always necessary, labels can be an important part of many maps. Ensuring that they are legible, helpful, and well-designed can be a complex process. Spending some time learning about typography in general will help you. We'll start by talking about the placement of labels. There are many other topics to consider, like font face and color, that we won't get into now. If you want to know more, you can start by exploring tools like TypeBrewer, or joining some of the conversations the cartographic community is having about map typography. If we take another look at Stamen's maps, we can see again the positive effects of careful design. They have small and large labels, a carefully made font choice, and considerate placement of labels off to the side, or even hidden. If we look back at our ugly map, we can see that one of its biggest problems is where the labels are placed. There is no logic behind where they're placed, nor is there any filtering. They are just randomly placed, and add nothing to the viewer's understanding of the map. Clearly, adding text to a map in the form of labels is a large consideration, so let's get started.

Adding our Labels

Before we add our labels, let's copy the styling that we have done with our markers at the end of the last lesson. Just navigate to the CartoCSS panel, select all of the CartoCSS and use Ctrl+C/CMD+C to copy the text. Next, go ahead and use the map layer wizard. Near the bottom, you can see the option to add label text based on a column. In our case, we're interested in mag. Once you select the mag column, you can see that the labels are pretty pointless, like the labels in our ugly map. There are too many of them, and they don't communicate anything to the viewer. Let's fix this by fine-tuning our CartoCSS. You'll notice that there's a section formatting the markers, and one formatting the labels.
Let's go ahead and replace the section formatting the markers with our copied CartoCSS by deleting it and pasting in the CartoCSS from where we left off in Lesson 3. If you apply the style, you'll see that not much seems to change. The labels are just too obtrusive to allow us to see any of the underlying markers. To fix this, let's go ahead and add rules for when labels appear on the map. Let's say we only want to see the mag label for large earthquakes. To do so, we would want to use some of the skills we learned in Lesson 3. We would go ahead and add conditions to the CartoCSS formatting that tell CARTO to only sometimes display the labels. See if you can decipher the block of code below: Here, we're telling CARTO to only display the labels when the mag is greater than or equal to 5, and the zoom is larger than 6. That way, we only see the mag measurement for the largest earthquakes, and only when we're zoomed in enough for the label to make sense. Notice that we can chain together conditions (like zoom level, or mag value) by just including them side-by-side, without characters in between them. We can also describe multiple condition pairs in which we would want the labels to be displayed. For example, we want to see labels when the mag is at or above 6 and the zoom is above 6, OR when the mag is above 5 and the zoom is above 7. To do that, we just separate condition chains with commas. Our code, then, would look like this: You can see that we have four pairs of conditions, which, if met, would mean that a label is displayed. In this case, more and more labels are displayed the more zoomed-in we get. This allows us to preserve readability when we're zoomed out, but include a great deal of data for when we're zoomed in. You can edit other attributes of labels using CartoCSS like this, so feel free to play around! Remember that you can't "break" your map by tinkering with the CSS, and can revert to the standard wizard whenever needed.
You can also copy and paste old CartoCSS like we did earlier in this lesson, in order to preserve your work. Armed with these tools, go forth and build beautifully designed maps!
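For reference, conditional label rules of the kind this lesson describes look like the following CartoCSS sketch. The layer name #earthquakes, the font, the text styling, and the breakpoints beyond the pairs quoted in the lesson are assumptions for illustration, not the lesson's exact code:

```mss
/* Single condition pair: label only large quakes, only when zoomed in.
   Side-by-side conditions are ANDed together. */
#earthquakes::labels [mag >= 5][zoom > 6] {
  text-name: [mag];
  text-face-name: 'DejaVu Sans Book';
  text-size: 10;
}

/* Multiple condition pairs, separated by commas (ORed together):
   more labels appear the further you zoom in. */
#earthquakes::labels [mag >= 6][zoom > 6],
#earthquakes::labels [mag >= 5][zoom > 7],
#earthquakes::labels [mag >= 4][zoom > 8],
#earthquakes::labels [mag >= 3][zoom > 9] {
  text-name: [mag];
  text-face-name: 'DejaVu Sans Book';
  text-size: 10;
}
```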
OPCFW_CODE
The following scenario is typical in organizations where decision makers choose tools based on an immediate tactical need: We're going to deploy the Acme Inc. configuration management tool because one of our flagship projects desperately needs to address some immediate issues with version control. Although there are more comprehensive tools on the market, the Acme product is cheaper and it will perform the job. By morning, the problem will be solved. Although this approach satisfies a short-term need, it often leads organizations to amass tools that overlap in capability, that fail to integrate, and that place a burden on the IT department to maintain, as new tool versions become available. This article highlights five themes to help you select a strategic toolset, rather than a collection of tools that present challenges such as overlapping functionality or lack of integration. IT professionals use tools to automate manual processes at various points within a typical software supply chain. Tools might be business-relevant (such as enterprise architecture and portfolio management tools), development-relevant (such as requirements management, design management, coding and test management tools) and operations-relevant (such as release management and monitoring tools). Typically, teams select a toolset according to criteria such as the features available, the platforms supported, the ability to address scalability and security, and the flexibility to support different development lifecycle approaches; for example, waterfall, iterative, agile development, and disciplined agile development. However, when different departments in an organization choose the tools they need and do not consider the bigger picture, silos of capability develop. It becomes inefficient or impossible to transfer tools and work between departments. Although individual tools can add value, consider the possibility that an integrated toolset offers more value. 
Consider the tool integrations shown in Figure 1. The integration between test management and requirements management can make it possible to link a test case to a requirement. This information can be used to determine the level of test coverage against the defined requirements and can help identify a need for additional resources if the coverage is less than adequate. Figure 1. Integration examples Another example is the integration between design management and configuration and change management, where designs are version-controlled in the configuration and change management solution. A large number of design changes can indicate areas of instability in the designs and can highlight the need to focus on these areas accordingly. An integrated, strategic toolset can provide a level of visibility and transparency of project progress, and can lay the foundation for an effective approach to governance. In many cases, the ability to report management information is enabled through an integrated toolset. Many organizations also view the toolset as the key component of an integrated supply chain, as shown in Figure 2. This view has become commonplace given the increased interest in agile development, DevOps (the processes that link development and operations tasks), and continuous delivery. Figure 2. An integrated supply chain An integrated supply chain is an effective way to look at the landscape, because it reveals two gaps that are often present: - The gap between the business and IT (the business-IT gap). To address this gap, look at the following areas: - Consider the integration between "above project" tools (such as those used to support enterprise architecture and portfolio management) and "project-specific" tools such as those used to support requirements management and design management. 
- Consider the use of agile methods (where, for example, an IT project includes a business representative directly in project development) and the associated tools that provide support for agile development, in addition to traditional development lifecycle approaches. - The gap between development and operations (the IT-IT gap). To address this gap, look at typical DevOps solutions, such as processes and tools to support continuous testing, continuous release, and continuous monitoring and optimization. This supply chain thinking and the implied integrations can support strategic decision-making. For example, an organization in a regulated environment (such as those in the financial or pharmaceuticals sectors) needs to determine the effect of a new mandate. To determine the cost of the changes required by the mandate and to take appropriate action, the organization needs to look at the links between the current requirements, the designs derived from these requirements, the codebase derived from the designs and so on, all the way through the supply chain. Without ready access to this information, the organization has to rely on some form of "software archaeology" to manually search documents and other data sources, and to interview practitioners who might have insight. An integrated view of the toolset makes it easy to trace requirements to the elements that implement them and ensures that IT is aligned with product strategy, portfolio decisions, and enterprise architecture principles and standards. Recognize that heterogeneity is the norm In most cases, the tools environment used in day-to-day work is heterogeneous – a suite of tools from several different vendors. For example, the tools for word processing, spreadsheets, presentations, email, web browsing, and other purposes typically come from a variety of vendors. The situation is similar for the integrated development tools environment. Typically, these environments consist of tools from many different vendors. 
Figure 3 illustrates products from six different vendors (A, B, C, D, E, and R). Figure 3. A heterogeneous environment When you select a strategic toolset, remember to consider the tools and the integrations available. If you have to build or buy missing integrations, the cost of the overall strategic tools architecture can increase substantially. Open platforms and standards foster an ecosystem in which IBM products, partner products, open source products, and products from other vendors can work harmoniously. The Jazz platform, an integration platform for software development tools, transforms software and systems delivery and makes it more collaborative, productive, and transparent, through integration of information and tasks across the phases of the lifecycle. Another example is the Open Service for Lifecycle Collaboration (OSLC) standard, which provides specifications to integrate tools from different vendors. Consider the total cost of ownership When you select a strategic toolset, remember that the successful introduction of any toolset goes beyond the cost of the tools. Consider this simple example: a team decides to use an open source tool because the cost of purchase, when compared with commercial tools, is negligible. Or is it? Although some open source tools are quite excellent, others require additional investment that is not immediately obvious. Several elements, in addition to initial purchase costs, affect the total cost of ownership. The framework shown in Figure 4 provides a comprehensive view of a development environment. The developerWorks article Define the scope of your development environment includes more details. Figure 4. Development environment definition This framework considers six key elements of a development environment: method, tools, infrastructure, organization, enablement, and adoption. Each element adds cost to the total cost of ownership. 
For example, in addition to the cost of any tools, consider the cost of the following tasks: - Create method content. - Configure the tools to accommodate the defined method. - Provision the infrastructure required by the tools. - Perform any data migration from an existing toolset. - Create and provision enablement (training and mentoring). - Provide resources to act as the first-level support for the development environment. Look for a transformation partner, not simply a technology provider When you acquire a tool, you enter into a relationship with the vendor. For example, you are dependent on the vendor's support channels and dependent on any product fix packs and future releases. When you make strategic tool decisions, think of the tool vendor as a transformation partner, rather than only a technology provider. You are, in essence, choosing both a tool and a vendor. Consider whether the vendor organization is solvent and stable. Does it have a global presence with local support? Does it have an active user community? Does it provide a developer network? Other areas to consider are less obvious. Does the vendor provide you with a toolset and leave you to implement it, or do they also have a services organization that can accelerate the time-to-value from the investment made in tools? Do they have proven frameworks and approaches to help you transform your capabilities? Do they emphasize how to make the change meaningful to practitioners? Do they have the case studies and references that you expect? Take a value-driven approach The cost to acquire a strategic toolset has to be balanced against the value to be gained. The cost and value contribute to the return-on-investment element of a business case for any investment in tools. At the highest level, value statements are often aligned with faster, cheaper, better objectives, such as improved practitioner productivity (faster), reduced development cost (cheaper) and higher quality solutions (better). 
However, the adoption of any toolset can stall if some simple elements are not in place. A simple framework that focuses on these elements is provided by John Kotter, in his book Leading Change. Two of the steps in the Kotter framework are directly focused on value. The first step is to establish a sense of urgency (in this case, an urgency to invest in tools). Make it clear that when the organization addresses the urgency, it receives tangible business value. Any compelling reason to act often comes about as the result of a crisis, potential crisis, or significant opportunity. The goal is to remove the complacency of the organization to stay with the current state of things. Most organizational change initiatives fail at this step and the adoption of a strategic toolset is no different. The second value-focused step is to generate short-term wins. Belief in a vision does not last forever; evidence that the introduction of a strategic toolset delivers tangible results is the only way to ensure that people stay committed to make the changes. Deploy changes to capabilities in incremental stages. Each increment adds value and can be implemented in a reasonable amount of time. Plan to deploy incremental changes according to the priorities of the organization and the value to be gained. When you choose a strategic toolset, apply the five themes described in this article. If you consider all these factors, your tool selection is more likely to result in a fit-for-purpose environment that maintains its strategic value in the long term.
# -*- coding: utf-8 -*-
"""
Created on Wed Aug 11 11:55:14 2021
@author: ansegura
"""

# Import libraries
import yaml
import tweepy
from requests.exceptions import Timeout, SSLError, ConnectionError
from requests.packages.urllib3.exceptions import ReadTimeoutError, ProtocolError
from pymongo import MongoClient

######################
### CORE FUNCTIONS ###
######################

# Util function - Read dict from yaml file
def get_dict_from_yaml(yaml_path):
    result = dict()
    with open(yaml_path) as f:
        yaml_file = f.read()
        result = yaml.load(yaml_file, Loader=yaml.FullLoader)
    return result

# Twitter function - Read Twitter API authentication credentials
def get_twitter_auth():
    # Read twitter bot credentials
    yaml_path = '../code/config/credentials.yml'
    twt_login = get_dict_from_yaml(yaml_path)

    # Setup bot credentials
    consumer_key = twt_login['consumer_key']
    consumer_secret = twt_login['consumer_secret']
    access_token = twt_login['access_token']
    access_token_secret = twt_login['access_token_secret']

    # Authenticate to Twitter
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)

    return auth

# Twitter function - Fetch tweets list from a specific user
# Note: Twitter only allows access to a user's most recent 3240 tweets with this method
def get_all_tweets(api, screen_name):
    all_tweets = []

    # Make initial request for most recent tweets (200 is the maximum allowed count)
    try:
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, tweet_mode='extended')

        # Save most recent tweets
        all_tweets.extend(new_tweets)

        # Save the id of the oldest tweet less one
        oldest = all_tweets[-1].id - 1

        # Keep grabbing tweets until there are no tweets left to grab
        while len(new_tweets) > 0:
            # All subsequent requests use the max_id param to prevent duplicates
            new_tweets = api.user_timeline(screen_name=screen_name, count=200, tweet_mode='extended', max_id=oldest)

            # Save most recent tweets
            all_tweets.extend(new_tweets)

            # Update the id of the oldest tweet less one
            oldest = all_tweets[-1].id - 1

    except tweepy.TweepError as e:
        print('Error 1:', e)
    except (Timeout, SSLError, ConnectionError, ReadTimeoutError, ProtocolError) as e:
        print('Error 2:', e)

    # Transform the tweepy tweets into an array that contains the relevant fields of each tweet
    tweet_list = []
    for tweet in all_tweets:
        new_tweet = {
            'id': tweet.id_str,
            'created_at': tweet.created_at,
            'message': tweet.full_text,
            'lang': tweet.lang,
            'hashtags': [ht['text'] for ht in tweet.entities['hashtags']],
            'user_mentions': [mt['screen_name'] for mt in tweet.entities['user_mentions']],
            'retweet_count': tweet.retweet_count,
            'favorite_count': tweet.favorite_count,
            'retweeted': tweet.retweeted,
            'source': tweet.source,
            'display_text_range': tweet.display_text_range
        }
        tweet_list.append(new_tweet)

    return tweet_list

# Util function - Upsert documents into MongoDB
def mongodb_upsert_docs(mdb_login, doc_list):
    # Login
    client = MongoClient(mdb_login['server'], mdb_login['port'])
    db = client[mdb_login['db']]
    coll = db[mdb_login['collection']]
    total_docs = coll.count_documents({})
    print('', coll.name, "has", total_docs, "total documents.")

    # Upsert documents
    for doc in doc_list:
        coll.update_one({"id": doc['id']}, {"$set": doc}, upsert=True)

#####################
### START PROGRAM ###
#####################

if __name__ == "__main__":

    # 1. Create Twitter API bot
    auth = get_twitter_auth()
    api = tweepy.API(auth)
    api.verify_credentials()
    print(">> Authentication OK")

    # Show user account details
    tw_user_name = "@seguraandres7"
    user = api.get_user(screen_name=tw_user_name)
    print(">> User details:")
    print('', user.name)
    print('', user.description)
    print('', user.location)
    print('', user.created_at)

    # 2. Fetch the tweet list from a specific user
    tweet_list = get_all_tweets(api, screen_name=tw_user_name)

    # 3. Upsert tweets into MongoDB
    yaml_path = 'config/credentials.yml'
    mdb_login = get_dict_from_yaml(yaml_path)
    mongodb_upsert_docs(mdb_login, tweet_list)
    print('>> tweets upserted:', len(tweet_list))

#####################
#### END PROGRAM ####
#####################
Richard Harper is Principal Researcher at Microsoft Research in Cambridge and co-manages the Socio-Digital Systems group. Richard is a sociologist concerned with how to design for 'being human' in an age when human nature is often caricatured or rendered in oversimplifying ways. This is particularly so in relation to human communication, where the goals and experiences enabled by new technologies are often badly misconceived by communications companies and scientific observers alike. His 10th book, Texture: Human expression in the age of communications overload (MIT Press), awarded the Society of Internet Researchers' 'Book of the Year' in 2011, deals with this in particular. His current research focuses on the social organisation of 'gaze' in video-based messaging (such as that provided by Skype), on how to attain greater expressive richness in lightweight messaging platforms, and on the problem of redefining the abstractions used to articulate digital entities ('files' and such like) in cloud-centric computational infrastructures. Amongst his prior books were the IEEE award-winning The Myth of the Paperless Office (MIT Press, 2002), co-authored with Abi Sellen, and Inside the IMF (Academic Press, 1997). In 2011 he published The Connected Home: the future of domestic life (Springer, December 2011). His latest collection, Trust, Computing and Society (2014), seeks to redress the febrile tenor of much of the discussion on trust and computing, and offers instead, it is hoped, a more balanced interdisciplinary view that will foster insightful debate between computer scientists, philosophers, sociologists, and designers, all of whose perspectives are represented in the book. He has just finished a monograph (with Dave Randall and Wes Sharrock) called Choice: The science of reason in the 21st Century (Polity Press).
This considers what it means to claim that human choice can be predicted, how such claims are used in the engineering of commonplace technologies like search engines, and the relationship between these claims and disciplinary distinctions in the social sciences. It explores the relationship between these distinctions and everyday contexts of choice, with particular concern for the changing landscape of choice in the age of the Internet. Essays on some of his work on this and other topics can be found in his blog.

His work is not only philosophical and sociological, but also includes the design of real and functioning systems, for work and for home settings, for mobile devices and for social networking sites. Numerous patents have derived from his work. Particular foci have been new forms of messaging, as illustrated by Glancephones and Wayve devices (which echoes his current concern with Skype), and exploratory control devices for cloud-based interaction, such as the Cloud Mouse, which in turn reflects his research on new file types and social networking practices (see, for example, his paper, What is a file?).

Prior to joining MSR, Richard helped lead various technology innovation and knowledge transfer companies, and in 2000 he was appointed the UK's first Professor of Socio-Digital Systems, in the Sociology Department at the University of Surrey, England. It was there that he also set up the Digital World Research Centre. Prior to this he was a researcher at Xerox PARC's fifth lab, EuroPARC, in Cambridge. In 2011 he was elected a Fellow of the Royal Society of Arts. He became a Fellow of the IET in 2010 and a member of the CHI Academy in 2014. He lives in Cambridge with his wife and three troublesome but occasionally delightful children.
Last July, Hotmail users received a brand-new update from Microsoft. It changed the UI from the Hotmail interface users were familiar with into Outlook.com. From Hotmail to Outlook, the switch left many questions in its trail.

To existing Hotmail users, the change can be quite confusing. Then again, considering that the email service has had several names since Microsoft bought it from its original creators in 1997, they are probably used to Live Mail, Outlook, and Outlook Web App being used interchangeably with Hotmail. To non-Hotmail users, the update comes as a major surprise: most of them have consigned the email service to the category of "where are they now?" Evidently, Hotmail isn't dead, as many assumed. It was simply the UI that was put to rest. Everything else, including your Hotmail login, is the same.

Hotmail is an email service that rivals the likes of Google Gmail and Windows Live. It used to be a popular free email alternative, but its popularity was overshadowed by Gmail. Even so, Hotmail users held on to it because it offers:
- Updated anti-spam and anti-hacking security software
- A host of Microsoft services, such as Skype, Windows Live ID, and Xbox Live

Hotmail Login Update

The recent update of Hotmail was intended to improve the interface for a better user experience. The brand-new Outlook interface is more efficient, smoother, and gives users more control over email management. Existing Hotmail users have the option to keep their old Hotmail account, since it will still work. A lot of Hotmail users were concerned that they could no longer access their account after the move to Outlook. However, this isn't the case. The only thing that has changed is the Hotmail home page.

To sign in to your Hotmail account:
Type the address of the Hotmail login page in a browser.
Note: You will be automatically redirected to the Microsoft free personal email login page.
Type your Hotmail email account in the Microsoft sign-in box.
Note: Because your Hotmail account is one of the Microsoft accounts, alongside the Microsoft Free Personal Email Account and Live Account, you can use your Hotmail login in the Microsoft sign-in box.
Click the Sign in button.
Once signed in, you'll be redirected to the Hotmail home page. If your sign-in fails, it can mean one of two things: your email and/or password is incorrect.

Those who would like to move to Outlook.com must update their Hotmail:
Sign in to your Hotmail account.
Click Options, then choose Free Upgrade to Outlook.com.
The change won't affect your email address, and your password and old messages will be saved.

Those who create a Hotmail account now will get the new Outlook interface.

To create a Hotmail or Outlook account:
Go to ww.login.live.com
Click Create a New Account.
Provide all the details required to complete the process.
Choose a login (via another email address or phone number), and then confirm your new account.
[jvm-packages] ArrayIndexOutOfBoundsException in XGBoosterGetModelRaw at the end of training on Spark

Training is failing with ArrayIndexOutOfBoundsException somewhere near the end of training (estimated by execution timing).

Environment and versions:
- 0.8 on Databricks 4.1 ML Beta (includes Apache Spark 2.3.0, Scala 2.11)
- 0.8.3 on Databricks 4.2 (includes Apache Spark 2.3.1, Scala 2.11)
- 1 Master and 2 Worker nodes: 140.0 GB Memory, 20 Cores

After some data preparation in R on the master, we are passing data to the Scala runtime through temp tables. The final dataset size is about a couple of gigs, with 183 columns in total.

Training code for 0.8:

import ml.dmlc.xgboost4j.scala.spark.XGBoost

val xgbParam = Map("booster" -> "gbtree",
  "objective" -> "binary:logistic",
  "eta" -> 0.001,
  "gamma" -> 4,
  "max_depth" -> 20,
  "min_child_weigh" -> 10,
  "nthread" -> 20,
  "subsample" -> 0.8,
  "colsample_bytree" -> 0.7,
  "scale_pos_weight" -> 36,
  "max_delta_step" -> 2)

val s = System.nanoTime
val xgbClassifier = XGBoost.trainWithDataFrame(training_ml, xgbParam, nrounds, numWorkers)
println("time: " + (System.nanoTime-s)/(1e9*60) + "s")

Almost the same code used for 0.8.3:

import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

val xgbParam = Map("booster" -> "gbtree",
  "objective" -> "binary:logistic",
  "eta" -> 0.001,
  "gamma" -> 4,
  "max_depth" -> 20,
  "min_child_weigh" -> 10,
  "subsample" -> 0.8,
  "colsample_bytree" -> 0.7,
  "scale_pos_weight" -> 36,
  "max_delta_step" -> 2)

val xgbClassifier = new XGBoostClassifier(xgbParam)
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setNumRound(10000)
  .setNumWorkers(numWorkers)
  .setNthread(20)

val s = System.nanoTime
xgbClassifier.fit(training_ml)
println("time: " + (System.nanoTime-s)/(1e9*60) + "s")

Both fail with the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 207, <IP_ADDRESS>, executor 1):
java.lang.ArrayIndexOutOfBoundsException
  at ml.dmlc.xgboost4j.java.XGBoostJNI.XGBoosterGetModelRaw(Native Method)
  at ml.dmlc.xgboost4j.java.Booster.toByteArray(Booster.java:435)
  at ml.dmlc.xgboost4j.java.Booster.writeObject(Booster.java:496)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1128)
  at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
  at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
  at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
  at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
  at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
  at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
  at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
  at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
  at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
  at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
  at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
  at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:397)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

Driver log is omitted for clarity.

Final notes:
- The same code works fine for 50, 100 and 1000 numRounds and fails on 10000.
- It also works on a dataset with 159 columns, but fails here with 183.
- The Spark cluster was started with spark.task.cpus=20, but it looks like that has no effect, as the same error is hit on 40 1-threaded workers.

Any ideas on a way to fix it?

Adding [jvm-packages] so that contributors to JVM packages will notice this.

We never released any version of 0.8.3... the next one should be 0.81... ask Databricks if it's some customized version.
Azure Log Analytics

Pwning your logs and system-wide alerting

As the compute and service resource offerings have matured, Azure has begun to add features to consolidate the UI/UX of its support tools. One of my favorite tools in that toolbox is Azure Log Analytics, built on top of a newly integrated query language from Application Insights (another great tool, this one powering analytics for web applications). In addition to using the new query language, Log Analytics has just this month begun to move major functionality from the Operations Management Suite (OMS) directly into the Azure Portal. Let's take a look at what this tool can do now that it is accessible right from Azure.

Previously, the Log Analytics insights, charts, and query results were confined to the OMS workspace. Now, you can use them in a number of places in Azure. The most visible features in Azure Log Analytics are the charts and graphs. Fully customizable, they let you chart just about anything you can think of. These charts are backed by the query language, bringing immediate visualization to the queries you run most often. And because you can edit and create them on the fly, you can easily add new charts if you're watching a specific task or have a new reporting mandate.

Overview or Dashboard (I'll take both, please)

Azure Log Analytics allows you to define an overview and a dashboard for specific use domains. This gives you the ability to define tiles that show important information at a glance, for use on a large display monitor (YESSSS!) or just quick checking. If you see something amiss or something you need to check, click right on the tile and you're into the dashboard, with detailed information and charting about that subject. In fact, Azure has already defined and pre-packaged a number of use domains into configurable presets called Solutions, which cover many of the common use cases for centralized logging and alerting.
Pictured above is the solution for Change Tracking. Any changes made to servers enrolled in my Log Analytics workspace show up in this solution dashboard, giving me a great way to stay on top of a large constellation of servers and services. Finally, you can now query logs across your enrolled systems (VMs, on-premises machines, Azure services) from a single centralized place. From the query results, you can edit and refine the query, or dump it right to a text file to run the query in Power BI (opening up limitless possibilities for visuals).

More to come in 70-533!

If you're looking for more, I'm working on the latest Azure content that will be released among the 150+ new hands-on training courses coming in July! Watch for my upcoming course, and tune in to the live show on July 31st at 10:30 AM CDT if you have any questions for me!

The post Azure Log Analytics appeared first on Linux Academy Blog.
Each week so far it takes me over 1.5 hours to bring my dev environment up to date after updating the plutus-pioneer-program and plutus-apps repos. This is really painful. I'm wondering if someone can show me how to improve this startup time - possibly avoid unneeded steps, suggest some optimization settings (if any exist)... I've listed my environment update process below. Any advice on how to streamline this and reduce my startup time would be much appreciated.

Note: I'm running on an Intel Mac (Big Sur), I'm in Cohort 3, and I'm new to nix and cabal. For the first two weeks I had trouble getting the playground up reliably, so after much experimenting I'm down to the following process that seems to work, but it takes a long time!

Summary of steps I need to do each week:

- Open a terminal and run from my PPP code directory:
  cd plutus-pioneer-program
  git pull origin plutus-pioneers-program   // gets main branch

- Look inside code/weekxx/cabal.project and find the latest plutus-apps commit -> say ABCDEF

- cd to the plutus-apps directory and do the following:
  a) git pull origin plutus-apps   // gets main branch
  b) git checkout ABCDEF           // to align plutus-apps code with this week's PPP commit
  c) nix-build -A plutus-playground.client
  d) nix-build -A plutus-playground.server
  e) nix-build -A plutus-playground.generate-purescript
  f) nix-build -A plutus-playground.start-backend
  Generally these first steps are reasonably fast, although sometimes the nix-builds can take up to 5 min each.

- Then I run:
  nix-shell -v   // -v gives verbose output... nice to watch that it is doing something
  This can take a long time each week: 30-60 min.

- Within the shell I run:
  cd ../plutus-playground-client
  plutus-playground-generate-purs        // this can take a while
  GC_DONT_GC=1 plutus-playground-server  // start server

- Open a new terminal/tab at the plutus-apps root and run:
  nix-shell -v
  Again, the second nix shell is generally faster, completing in 5-10 minutes.
- Within this shell:
  cd ../plutus-playground-client
  GC_DONT_GC=1 npm start   // to start the playground UI

- Open a 3rd terminal/tab at the plutus-apps root and run:
  nix-shell -v   // again for cabal repl; similar startup time to the 2nd shell

- Within this 3rd shell, access the cabal repl with:
  cd ppproot/plutus-pioneer-program/code/weekxx   // where xx is the week #, like 03
  cabal repl
  This takes another 30 minutes to load/build all the dependencies. Do I really need them all?

At this point I finally have a stable playground to use!!

- Sometimes I also open another nix-shell and boot up the doc server if I need that.
Small values with non-uniform distribution encoded as large values with uniform distribution

Given a non-uniformly distributed set of 32-bit values (for example), is there a way to reversibly encode each one as a 128-bit value (for example) where they'd be approximately uniformly distributed in that larger 128-bit space? Block ciphers do this as I understand it, but they require a key and are computationally expensive due to security concerns. If I didn't want security, just the property of approximate uniform distribution and increased size, is there a more efficient way?

Do you know the distribution? What you want sounds a lot like an admissible encoding, but that requires a uniform distribution over the smaller set.

LFSR based

Since you want something simple and not necessarily secure, here is one solution based on LFSRs. Assume that each of your inputs has a length of 32 bits. Now choose a maximal binary LFSR of length 128 (yes, not 32). Fill the LFSR's first 96 bits with 0s and the last 32 with your data. Now run the LFSR $x$ times, where $x>128$, and take the resulting 128 bits as the mapping of the 32-bit value into 128 bits. A maximal LFSR of length $L$ generates, within its period, every binary sequence of length $L$ except the all-zero one. Your data will be a starting point on a sequence of length $2^{128}-1$, so distinct inputs map to distinct values. You can get your value back by running the LFSR in reverse. This is easy since you know the tap points; otherwise, you would need 256 bits to reconstruct the LFSR with the Berlekamp-Massey algorithm.

OTP based

A simpler solution is based on the OTP: generate a random 128-bit key and XOR it with your input repeated 4 times. You did not want security, and this maps your input into a random-looking 128-bit value. You can get your value back by XORing with the 128-bit key and taking the first 32 bits.

Block cipher based

Block ciphers are a family of permutations, and with a fixed key you select only one permutation from the family.
Selecting and storing a truly random permutation instead is infeasible: the lookup table would need $2^{128}$ entries. With the key, you can map back and forth. Also, since a block cipher is expected to behave as a pseudorandom permutation (PRP), you get good randomness too. AES is quite fast; especially with AES-NI, it can reach GB/s. Fix a key and use AES. This is quite easy, since you can use OpenSSL tools to achieve this.
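The LFSR-based and OTP-based mappings above can be sketched in Python. The tap positions below are illustrative, chosen only so that the top bit is tapped (which makes each LFSR step invertible); they are not claimed to come from a primitive polynomial, so this toy register is not guaranteed maximal-length. For real use you would pick taps from a known primitive polynomial of degree 128. Note also that the all-zero state is a fixed point of the LFSR, so the input 0 maps to 0; a maximal-length register as described in the answer has the same caveat.

```python
import secrets

N = 128
MASK = (1 << N) - 1
# Illustrative taps; bit N-1 must be tapped so each step is invertible.
TAPS = (1 << 127) | (1 << 28) | (1 << 26) | (1 << 1) | 1

def parity(x):
    """XOR of all bits of x."""
    p = 0
    while x:
        p ^= 1
        x &= x - 1   # clear the lowest set bit
    return p

def step(s):
    """One forward Fibonacci-LFSR step: shift left, feed back the tap parity."""
    fb = parity(s & TAPS)
    return ((s << 1) | fb) & MASK

def unstep(s):
    """Invert one step by recovering the top bit that was shifted out."""
    fb = s & 1
    low = s >> 1                   # previous state minus its old top bit
    b = fb ^ parity(low & TAPS)    # works because bit N-1 is tapped
    return low | (b << (N - 1))

def lfsr_encode(value32, rounds=200):       # rounds > 128, as the answer suggests
    s = value32 & 0xFFFFFFFF                # top 96 bits zero, data in the low 32
    for _ in range(rounds):
        s = step(s)
    return s

def lfsr_decode(value128, rounds=200):
    s = value128
    for _ in range(rounds):
        s = unstep(s)
    return s & 0xFFFFFFFF

# --- OTP-based mapping: XOR the input, repeated 4 times, with a 128-bit key ---
KEY = secrets.randbits(128)

def otp_encode(value32):
    repeated = 0
    for i in range(4):                      # repeat the 32-bit value to fill 128 bits
        repeated |= (value32 & 0xFFFFFFFF) << (32 * i)
    return repeated ^ KEY

def otp_decode(value128):
    return (value128 ^ KEY) & 0xFFFFFFFF    # take the first 32 bits back
```

Both directions are exact inverses, e.g. `lfsr_decode(lfsr_encode(v)) == v` for any 32-bit `v`. The OTP variant is cheaper but, because the key is fixed, the structure of the input distribution leaks directly into the output; the LFSR mixes the bits more thoroughly.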
Very new to the AD piece, so your assistance and patience is greatly appreciated. I'm currently building Development and Test environments utilizing a single AD (2008 R2). Within each environment, there are specific test users.

The intention is for users to log on, using their standard logon, to access the local domain workstation. To access the DEV/TEST environments, specific (approved) users will access the machines by RDP using separate credentials. I'm attempting to set up an OU for each environment, and to have only the users and computers within each OU have permission to RDP into each environment/machine. I don't want other 'general' domain users to have access to these machines, with the exception of Domain Admins. I've been attempting to do this with no success. Nothing seems to work, and I'm wondering if it's just a simple setting I'm missing. Thank you in advance; I look forward to hearing from you.

There's no compelling need for separate credentials (unless you really want them). RDP access is typically done by security groups. Create a security group (developers, for example) and add the appropriate users to it. Then permit remote access to the machines in dev to the developers security group. You can make it as granular as you need to. Domain Admins have RDP access by default. There is a good discussion of GPOs for this: if you create a GPO for each computer group, and then use security groups to control remote access to the computer group, you're good to go.

Thank you so much for your quick response. Unfortunately, there is a requirement to only allow specific users to access specific machines. As for 'Add the appropriate users to the security group. Then permit remote access to the machines in dev to the developers security group': I've done something similar to this, by creating the security group and adding that group to the built-in 'Remote Desktop Users'.
Can this be done locally on the machine, only allowing specific users to log on? Thank you; I look forward to hearing from you.

Are you talking about local logins now or remote ones? Remote should deny all except those in the allow list. By default, standard users cannot RDP to any machine unless they are added to Remote Desktop Users. To do it from a GPO, here is the screenshot.

If you have only a few servers and test PCs, you can also manually set "allow logon locally" and "allow RDP login" in the local security settings of each server and remove domain users, so that only administrators and the specific users (or groups) can access the servers. For example, with only 3 servers, in the local security policy:
- Test-Svr-A only allows RDP for group "TS-A", which has Mr A, Mr B & Mr C
- Test-Svr-B only allows RDP for group "TS-B", which has Mr E, Mr F & Mr A
- Test-Svr-C only allows group "TS-C", which has Mr A, Mr B & Mr E

Then in the DC, create groups "TS-A", "TS-B" & "TS-C" and put in the appropriate users.
We continue the Data Visualization Weekly initiative to let you learn about new and interesting dataviz examples on a regular basis. This article showcases another four of them that might serve well as inspiration or simply help you better understand some facts and processes taking place out there in the world.

Data Visualization Weekly: September 8, 2017 - September 15, 2017

Cost of Natural Disasters in America

The New York Times' The Upshot created an impressive visualization displaying the costs of the most expensive natural disasters that have taken place in the US since 1980. The article, titled "The Cost of Hurricane Harvey: Only One Recent Storm Comes Close," tells first about Hurricane Harvey, which hit Houston just recently. Estimated at $70-$108 billion, Harvey is likely to become the second most expensive hurricane; only Katrina in 2005 was worse, at $160 billion. Check out the interactive chart and the article on The Upshot to learn more and compare the most devastating natural disasters by cost in 2017 US dollars.

Visualizing Data About Natural Disasters and Reported Deaths Worldwide

The Economist also shared charts visualizing data about natural disasters and their impact. The publication focused on global data and, in particular, found that although the number of disasters is increasing, the number of human deaths they cause is falling, as safety measures improve day by day. The world map at the beginning of the article shows the number of natural disasters that happened around the world between 1995 and 2015. The small line chart just below displays their total against the total number of deaths they caused, starting from 1900. There is also a stacked column chart in the article that supplies insights into the nature of the disasters that took place between 1980 and 2016: meteorological, hydrological, and climatological.
US Pension Fund Problems Visualized

Bloomberg visualized US public pension funding ratio data by state, and now we can clearly see how precarious the situation in that field is. The map chart in the article graphically represents each state's pension funding ratio. In fact, only DC and six states have managed to narrow the funding gap in recent years.

Visualizing Data About the Dynamics of Migration to Britain

The Economist is one of our favorite publications, and we are glad to mention another of its dataviz pieces here. This one is about migration to Britain clearly falling after the country's vote to leave the EU. The line chart here shows how many people have come to Britain from the EU8, EU15, and EU2 countries, along with the total across all the EU countries, during the last decade.

That's all for today. For more cool examples of visualizing data, check out our previous Data Visualization Weekly articles and stay tuned for the next issues. To create your own beautiful charts, see the AnyChart documentation and API.
Quick Start on Windows

Hi. A beautiful effort is being put into this library. I really appreciate the goal and vision of this project. I gave it a first try by following the simple-to-read guide: https://nappgui.com/en/start/quick.html

I struggled a bit with it. It wasn't as straightforward as it first seemed from the reading. When running cmake -S ./src -B ./build, it wouldn't just compile ("Unknown compiler" and such). Previous steps were required.

The first thing to learn is the incompatibility with MSYS2/MinGW. Therefore, I had to download 10 GB of Visual Studio Community, which is over-bloated to my taste. Then CMake for Windows as instructed, avoiding the MSYS2-installed one.

The second thing to learn, not being a CMake expert, is that it should be set up with cmake-gui (picture attached), or alternatively with this command:

cmake -G "Visual Studio 17 2022" -A Win32 -T v143

So the sequence in the guide is missing one step:

git clone --depth 1 https://github.com/frang75/nappgui_src.git
cd nappgui_src
cmake -G "Visual Studio 17 2022" -A Win32 -T v143
cmake -S ./src -B ./build
cmake --build ./build --config Debug

Once compiled, I don't get the expected demo/example binaries in the Debug folder, and I got stuck at this point; not sure why yet. I downloaded the example binaries and they look very nice. I really like how they are barely 400 KB.

The third thing to learn: I read somewhere about a command msbuild NAppGUI.sln, which is called from the src folder. When VS was installed, it didn't place anything on the PATH, so I had to manually add C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin. After that, I restarted the CMD console and went back to the folder:

cd src
msbuild NAppGUI.sln

Finally, I got the example binaries in src/Debug/bin/... But they are about 3 MB, not 400 KB. I guess that some deep compiler optimization is required and still missing from the sequence. Also, I haven't opened the +10 GB of Visual Studio.
Is it really required? Hope this feedback helps. Kind regards,

Hi @DiegoJArg. Thanks for using NAppGUI and for the feedback!

To compile NAppGUI on Windows, the command sequence is:

    git clone --depth 1 https://github.com/frang75/nappgui_src.git
    cd nappgui_src

    :: -G "Visual Studio 17 2022" forces this version of VS. It can be omitted if you have only one version installed.
    :: -A Win32 compiles for Windows 32 bits (64 bits is the default in VS 2022).
    :: -T v143 is not required (it is the default toolset in VS 2022).
    cmake -G "Visual Studio 17 2022" -A Win32 -S ./src -B ./build

    :: This will build in Release mode (with all speed and space optimizations).
    :: You have three configurations: Debug, Release, ReleaseWithAsserts.
    cmake --build ./build --config Release

After compiling, here you have all the executables:

    cd build\Release\bin
    dir
    08-Feb-23 21:58 <DIR>         .
    08-Feb-23 21:58 <DIR>         ..
    08-Feb-23 21:58       380,928 Bode.exe
    08-Feb-23 21:58       312,832 Bricks.exe
    08-Feb-23 21:58       414,720 Col2dHello.exe
    08-Feb-23 21:58       312,832 Dice.exe
    08-Feb-23 21:58       320,000 Die.exe
    08-Feb-23 21:58       343,552 DrawBig.exe
    08-Feb-23 21:58       379,392 DrawHello.exe
    08-Feb-23 21:58       673,280 DrawImg.exe
    08-Feb-23 21:58       314,880 Fractals.exe
    08-Feb-23 21:58       715,776 GuiHello.exe
    08-Feb-23 21:58       313,344 HelloCpp.exe
    08-Feb-23 21:58       307,200 HelloWorld.exe
    08-Feb-23 21:58       149,504 nrc.exe
    08-Feb-23 21:58       401,408 Products.exe
    08-Feb-23 21:58 <DIR>         res
    08-Feb-23 21:58       332,800 UrlImg.exe
           15 File(s)  5,672,448 bytes

More info about Windows compilation here: https://nappgui.com/en/guide/win_mac_linux.html#h1
More about configurations here: https://nappgui.com/en/guide/win_mac_linux.html#h4

In the \build folder you have NAppGUI.sln if you'd like to debug/edit your code with Visual Studio. At the moment, MSYS2/MinGW is not supported by the NAppGUI build system; Visual Studio is the only "official" NAppGUI compiler for Windows. You can also download Visual Studio Build Tools 2022. It will install the compiler, but not the full environment.
It is not necessary to invoke msbuild directly; cmake --build ./build --config Release does it for you.

Wow, thanks for answering so quickly. I followed these steps and noticed that it had been generating for Ninja, probably because I have MSYS2/MinGW installed and on the PATH — even with Visual Studio installed, it defaulted to MinGW. All demos are now the right size. Thank you. I hope you add support for MinGW some day.

MinGW support is planned for NAppGUI 1.5. Please follow these issues:
https://github.com/frang75/nappgui_src/issues/5
https://github.com/frang75/nappgui_src/issues/48
Something in the sack wiggles. David Sever pauses, picks it up and begins to wrench open the knot, complaining that he tied it too tightly. He adds, however, that good knots help to keep reptiles from escaping in his vehicle. Knot loosened, a snake — tongue flicking — pokes its head through the opening. As Sever talks about the beauty of the green reptile with its black spots, he gently lifts it from the sack. The snake, one of a variety known for eating other snakes, wraps around Sever’s arm and appears to be unafraid of the man who found him that morning and slipped him into a sack. The snake moves up his arm while the Southeastern Louisiana University herpetologist talks about how king snakes range from the East Coast to the West Coast and from Canada to Mexico. Sever and Robert D. Aldridge, of St. Louis University, recently co-edited a new book called “Reproductive Biology and Phylogeny of Snakes.” Among its 17 chapters are some authored or co-authored by Sever, SLU biologists Brian Crother and Mary E. White, and graduate student Justin L. Rheubert. Though the book is written mainly for scientists and advanced biology students, Sever said others with an interest in snakes can “find much to learn from it.” It contains chapters on the evolution of snakes, which date to dinosaur times and now have an “incredible diversity” of about 3,000 species, Sever says. Sever and his students have traveled around the world to study many of those species. One of his students just returned from Costa Rica with a number of sea snakes for a study on their reproductive organs, which appear to be somewhat different from those of other snakes SLU scientists have studied using electron microscopy. Another of Sever’s graduate students is laboriously slicing up a python to determine more about its reproductive parts. Many of the snakes Sever has caught and studied since coming to Louisiana have been cottonmouth moccasins. A large sample size is needed when studying snakes.
Sever said that makes the cottonmouth a perfect snake to study in Louisiana, because not only can he find lots of them, he also can locate one any month of the year. Though bitten numerous times by nonvenomous reptiles, Sever said, he’s never been bitten by a cottonmouth or other venomous snake. After putting the king snake back in the sack, Sever moves to another area of the biology building where he provides water for a couple of large copperheads. Before he gives them the water, he moves them with a grabbing device that keeps his body out of their reach. “We used to catch them by hand,” he said of snakes used for research. Sever caught and first became interested in snakes much like any other little boy might. He and his older brother captured a garter snake as part of his brother’s work to obtain a Boy Scout merit badge. Later, Sever and his friends would go out and catch snakes for fun. As evidenced by the king snake in the sack, Sever still likes to catch snakes. The herpetologist said that later in the day, he planned to let the long, slender snake loose in his backyard.
Tuesday 15 November 2022
Anatolii Kmetiuk, Scala Center

We’ve recently completed another successful Google Summer of Code program. Designed to bring more beginners into programming communities, participation in the program is part of the Scala Center’s strategy to make contributing to the Scala language more newcomer-friendly. This year, we welcomed four students guided by five mentors. They contributed to projects such as Scala Native, Creative Scala, Scalafix, and Metals. In this article, you can find a short summary of what was done, as well as links to find out more.

Scala Native: Linker Optimization

Scala Native is an optimizing ahead-of-time compiler and runtime for Scala. It enables the programmer to compile Scala to native code that does not require a JVM for its execution. Scala Native is great for performance-critical applications – think embedded software – that require tight memory control while keeping all the type-safety and compile-time verification advantages that come with Scala.

During GSoC 2022, Liangyong Yu, mentored by Wojciech Mazur, worked on making Scala Native faster and more memory-efficient. He introduced incremental compilation, which reduced build time by 21% on average. He also developed a benchmarking infrastructure to measure the performance of Scala Native builds. Considerable work was done on the Scala Native optimizer, which resulted in decreased memory consumption. Overall, Liangyong’s work makes Scala Native builds even more performant time- and memory-wise, which will be especially interesting for developers of large codebases. You can find out more in Liangyong’s report.

Doodle

Doodle is a compositional graphics library for generative art in Scala that enables users to declaratively create art pieces and other visualizations.
A Doodle graphic is generally parameterized by one or more variables, such as stroke width and background color, or sometimes something more domain-specific like the magnitude of gravity or the number of iterations of a fractal. In any of these cases, the creator of a graphic may want to use a GUI to change the parameters, either to fine-tune the art piece or to make the graphic interactive. Mikail Khan, mentored by Noel Welsh, developed a domain-specific language for describing such GUIs during the Summer of 2022 – have a look at this documentation microsite for a quick demo.

The project will be especially interesting for Scala educators. For experienced and novice programmers alike, it’s important to quickly get feedback on what your code is doing – this is how we learn. It is even more important for people who have just started their journey in programming. Having visual feedback you can play with is a great feature that can be used to teach Scala to newcomers. The final report for this project is available here.

Towards Scala 3 support for Scalafix ExplicitResultTypes: cross-compiling to Scala 3

Scalafix is a linting and rewriting tool for Scala codebases. ExplicitResultTypes is one of the built-in rules; it inserts type annotations for inferred public members. Unlike most rules, which rely only on the Scalafix & SemanticDB APIs, ExplicitResultTypes also depends on the Scala presentation compiler, requiring users to run Scalafix with the Scala binary version targeted by the source files they want to annotate. As Scalafix is currently only cross-compiled to Scala 2.11, 2.12, and 2.13, it is not trivial to interact with the Scala 3 compiler, and thus ExplicitResultTypes cannot run on Scala 3 source files at the moment. Razvan Vacaru, mentored by Brice Jaglin and Meriam Lachkar, made significant progress towards the goal of supporting Scala 3 in ExplicitResultTypes by cross-compiling all Scalafix modules to Scala 3. You can find a detailed report here.
Semantic Highlighting in Metals

Semantic highlighting is part of the default LSP (Language Server Protocol – what connects Metals to your favorite code editor to bring all the nice developer experience to you) spec, but it is not yet supported by Metals. It would allow the highlighting of tokens (keywords etc.) based on semantic information about the code. This is especially useful with things like soft keywords in Scala 3, but not only there. Semantic highlighting is a long-standing feature request in Metals. Having finer-grained, smarter code highlighting would enhance the developer experience, making Scala even more pleasant to work with across all the different editors. Shintaro Sasaki, mentored by Tomasz Godzik, has been working on the project during the Summer of 2022. Currently, the project is still a work in progress – you can follow its development in the following PR.

The work done during GSoC 2022 makes Scala even more performant in certain areas, brings even more options for Scala educators to teach Scala, and enhances the Scala migration and development experience. The Scala community has gained four more contributors as a result of the program. We would like to thank the students whom we mentored this year – Liangyong Yu, Mikail Khan, Razvan Vacaru, Shintaro Sasaki – for their contribution to keeping Scala awesome. We would also like to thank the mentors – Brice Jaglin, Meriam Lachkar, Noel Welsh, Tomasz Godzik, Wojciech Mazur – for the time and knowledge they invested in getting the students up to speed with the projects and the community.

The Scala Center intends to participate in GSoC 2023 as well! If you are interested in joining, either as a mentor or as a student, keep an eye on our LinkedIn and Twitter for updates on the future installment of GSoC. You can also consult the timeline for 2022 to get an idea of the previous installment’s timeframes and when to expect things to come into motion in 2023.
I've been trying for a while now to make scripts for several different games, with install/uninstall functionality and so on, and I've got some issues / suggestions / questions about best practices.

1. My first issue is that applications don't uninstall cleanly. Even though the wineprefix directory is deleted, many games install application icons to the desktop or the Wine start menu that will not be removed if you use PlayOnLinux's "Remove" command. This is due to the desktop integration (winecfg -> Desktop Integration -> Shell Folder). Options:
a) Run the application's uninstaller before deleting the prefix itself.
b) Delete these manually as part of some uninstall script.
c) Disable the integration during install and just install our own shortcuts.

2. My second issue is that applications often install support tools like Acrobat Reader, which will get installed and reinstalled from different Wine prefixes, overwriting each other. In particular, some people would like Acrobat Reader installed in their main .wine install, and PlayOnLinux apps/games will overwrite the links. These should really remain unlinked, only to be run if the game launches a PDF manual or such. Options:
a) Try to disable installation of these tools from POL scripts.
b) Disable the integration during install and just install our own shortcuts.

3. My third issue is that many applications come with their icon embedded in the exe file, and there's no easily available/stable download location offering the icon in a Linux-compatible format. These icons are displayed properly when the shortcut is created inside Wine by the installer itself, but I can't find an easy way to make use of them in POL scripts. What I have found is that you can extract them using wrestool from the icoutils package and convert them to PNG using convert from the ImageMagick package, but you can't assume everyone has those installed (I didn't).
Options:
a) Convert once, and try to upload the icons somewhere on free hosting.
b) Modify the script to complain and make users install these packages.
c) Use some other icon, or none.

4. My fourth issue is about using common utility scripts. For now, several of my scripts download the winetricks script, set it as executable (chmod +x) and run it to install DirectX, .NET and various other DLLs a game might need. Some of those options require a Windows license etc., and I hope that's OK — it's nothing like cracks/no-CDs or such, but it's still legally a bit in the gray. Another question is whether this is an acceptable practice: if anyone were able to replace that script, it would get run without any checks. Though I suppose that'd be the same with POL scripts, it might be safer and easier to include it as some sort of standard script in POL. As far as I can tell, POL pretty much drops you in a bash script and you must do everything yourself.

5. This is more of an enhancement: I've noticed that winetricks does optional checksum verification on downloads, and this may also be a good idea for POL. It would add an extra layer of security that the file downloaded is the same as the one the script creator used, so if it gets replaced by a hacker or something, the script will no longer run it.

Apart from that, I must say PlayOnLinux for the most part works great and is an easy way to keep multiple apps in their own Wine sandbox. I think you can expect more scripts from me in the future...
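Point 5 above is straightforward to prototype. Here is a minimal sketch of download verification (in Python for brevity — a POL script would do the same with sha256sum in bash; the function name and usage are illustrative, not part of POL or winetricks):

```python
import hashlib

def verify_checksum(path, expected_sha256):
    """Return True if the file at `path` hashes to `expected_sha256`.

    Reads in chunks so large game installers don't need to fit in memory.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A script would call this right after downloading and refuse to launch the installer on a mismatch.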
Due to the COVID-19 pandemic, different sectors were affected differently; however, it was the education sector that was hit the most. Being part of the affected community, we empathise with other students. Certainly online learning has come to aid the situation, and it is the best option for the future of our planet as well. Some obvious benefits of online learning are its flexibility, wide selection of programs, cost efficiency and eco-friendly nature. However, we experienced some major flaws in the current online learning platforms. In any situation, the education sector should not be compromised. Since the growth of the whole world depends upon education, online learning platforms should be hassle-free and easy to adapt to. Some existing flaws in online learning platforms include minimal interaction between teachers and students, frequent backlogs and disturbances due to internet connectivity issues, no efficient way of solving doubts, etc. We tried to build a platform which solves these problems and helps students adapt to online learning.

What it does

Classaholic provides a one-stop solution for all the problems faced in current online learning platforms. It provides a platform for live interaction between teachers and students, and a common board with multiple-user access to discuss doubts and work on group projects/assignments. For students with temporary internet issues, there is a tutorial section which can be used to learn a concept later, in case of an internet issue while the live class is going on. The special feature, Capture the Class, helps students in rural areas with very poor internet connectivity. Through this, a teacher can capture the whiteboard once it is filled, and the image is converted so that it can easily reach students with minimal internet usage. This way, students can learn through live notes and avoid backlogs.

How we built it

The basic UI has been built with React and Bootstrap as front-end frameworks.
For the discuss feature, the Canvas API has been used along with socket.io and PeerJS. The Canvas API provides basic drawing features, socket.io sends real-time coordinates to all the joined users, and PeerJS, which is built on top of WebRTC, enables real-time video conferencing. For the record feature, mattdiamond's Recorder.js is used. Capture the Class uses Python with NumPy and OpenCV: it first gets test data by calibrating the board, then converts the capture to grayscale and uses edge detection to find the contour (outline) representing the whiteboard being scanned. It applies a perspective transform to obtain a top-down view of the board. Finally, JPEG image compression is used to produce the final compressed image.

Challenges we ran into

We faced difficulties with the machine learning frameworks. We also faced some difficulties getting adapted to new technologies like socket.io.

Accomplishments that we're proud of

We actually solved a major problem that a lot of students are facing currently. This will encourage students in rural areas to adapt to online learning.

What we learned

We learned computer vision and image processing via this model. We trained neural networks to compare input and output and produce an optimal image. We learnt how machine learning can be used to reduce an image's size while maintaining its quality and the clarity of its text via grayscale conversion.

What's next for Classaholic

We plan to bring more accuracy to the machine learning model we have used. We would also work on the UI to make it more pleasing to the eye. We plan to incorporate a chat feature in the live classrooms as well.
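Two of the steps in that pipeline — grayscale conversion and putting the detected whiteboard corners into a consistent order before the perspective transform — can be sketched with plain NumPy. This is only our reading of the steps described above, not the project's actual code; in practice OpenCV's findContours and warpPerspective would do the heavy lifting:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an HxWx3 RGB image to grayscale using ITU-R BT.601 weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def order_corners(pts):
    """Order 4 whiteboard corner points as top-left, top-right,
    bottom-right, bottom-left -- the order a perspective transform expects.
    """
    pts = np.asarray(pts, dtype=float)
    s = pts.sum(axis=1)                # x + y: min at top-left, max at bottom-right
    d = np.diff(pts, axis=1).ravel()   # y - x: min at top-right, max at bottom-left
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]])
```

The ordered corners, together with the target rectangle's corners, define the homography that produces the top-down view.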
Loli tavi 59-16 Snips light mk vic. Snails T7 CC (Just for fun request) Soviet pony Tetris tetrarch Tog 2 hour ball pit. I'm up for making the Tog skin.

Jump to content

I've actually tried to do exactly that using a modified event garage, but no matter what I try, I can't get any garage but the default to show up. Very irritating. Nah, I can color the event garage all I want; the problem is that I can't get any garage but the default to appear. Something to do with where to put it and what to name it, etc.

Edit 9/17: I'd make another post about this, but that just seems wasteful, considering the slowness of this thread of late. I'm currently in the final phases of testing my newest skin, this time for the M40/43. After that, I'm going to avoid open-top vehicles for as long as I can (they are a PAIN), and work on an extra-special skin for the T110E4. Edited by hazard747, Sep 18 2014 - 01:01.

Hey, anyone got a link to a Luchs skin? For Scootaloo? Edited by skull_helmet, Sep 24 2014 - 00:36.

It looks like there is one in RelicShadow's 'All in One' pack. ( http://forum.worldof...erhaul-package/ ) I think there is somewhere you can download it on its own (?). Or you can download the entire German package if you choose. And I suppose if nothing else works for you, I might be able to find time to make one next week. Not really any promises though. Edited by SuperCannon6, Sep 23 2014 - 14:13.

Finally, after weeks of silence, Hazard's tank emporium is back with another Fallout: Equestria themed tank. This time, it's a grenade-tossing stallion with a Med-X addiction: P-21

*Note: while I believe the lighting issue to have been solved by essentially redoing the skin using the proper base, I have not yet verified this.

In other news, there is a possibility that my recent Type 64 skin may have accidentally been released incomplete. I can now confirm that the Type 64 "bass blitz/presto grazioso" shipped without text on the side of the undamaged vinyl skin.
This issue has now been resolved. The link remains unchanged. Edited by hazard747, Sep 25 2014 - 04:41. Bweh? Is it too stressful to keep it maintained? On another note, my next tank skin is coming along well, though it's being a real nightmare to work with. Edit: this is probably a stupid question, but I can't get skin changes for the sport chaffee to show in the tankviewer or in-game. I'm thinking it's the directory: since I've never dealt with an HD tank before, I glossed over where to put HD skins. any help? Edited by hazard747, Sep 30 2014 - 05:41. 0 members, 5 guests, 0 anonymous users
Image Source: flickr.com

AJAX is a master tool for designing websites that are more dynamic. The idea behind AJAX file upload is to allow your users to upload files to the server without having to refresh the page. Facebook, Google, Google+, and other popular social networks today rely fully on the power of AJAX file upload to make uploading files an easy and comfortable process for website users.

Image Source: phrasemix.com

PHP/MySQL form validation can be quite annoying. I will be honest with you: nothing annoys a web user like having to type the same details repeatedly just because the server failed to validate them. Unfortunately, most websites still keep up this bad web development habit. For easy submission of forms, use the jQuery library. It helps validate the form, display errors if any, and then submit the form back to a PHP (Hypertext Preprocessor) page without ever having to refresh the page. This is made possible by the GET or POST HTTP request methods.

Have you ever been confronted by a situation where you keep creating web pages because your project keeps expanding and needing more features? It is common. The truth is, the jQuery library eliminates the need to create many pages and reduces your web application to very few pages that you can manage with ease. The use of a jQuery modal, for instance, is a powerful way to compress those pages into what may almost seem like a single-page application. This is one of the greatest web design trends in 2015.

Imagine you have a database table of about 500 rows and want to display it to the user. Would you loop through and display them all? Of course not. There are a couple of disadvantages to doing so. First, it increases page load time. Second, it annoys users, and you could lose site visitors. No one wants to scroll unnecessarily through a page with hundreds of records.
jQuery pagination is therefore the key concept to implement in this case. It tabulates your database table data in a manner that is easy for users to navigate. Including a jQuery search makes it even better, because users find only the data they are looking for, on the spot.

With the jQuery library, combined with PHP and MySQL functions, you can create powerful image manipulation that scales images down to the right dimensions before upload to the remote server. A good example is the jQuery image-crop functionality, which allows users to trim their own images.

Do not struggle to build a video player for your website on your own. There is a jQuery video player library, created by the jQuery team, to help you build a video website with ease.

Image Source: saggezza.com

It is great when users can just drag and drop files from their local computers onto your website and have them uploaded without ever having to press a button. This is a modern web development technique that only Google and Eskimi, a Nigerian social network service, have put into use. The function is not hard to create, and you should think of creating events to handle this if you allow users to upload files on your website.

Adding and removing styles when necessary is possible with jQuery. Once you write the function, you never have to worry about the static cascading style sheet, because your jQuery function manipulates it for you. Good examples include adding classes to hide and show a div element, and closing and toggling multi-user real-time chat windows like Gmail chat.

Image Source: webcssjquery.net

Millions of websites now use jQuery animation because of its advantages. This function takes multiple images and displays them in an assigned div element after a given amount of time. When the function reaches the end of the loop (the last image element), it goes back to the first image and repeats the process.
The jQuery animation function is mostly useful when you need to create a summary of your website through rollover texts and images. You can include call-to-action links on these pages so that users are redirected to a given page or prompted to perform a given action.

Last but not least, the jQuery library can be used for database refresh. It is annoying when a user has to refresh a web page to see changes made to their account or to the website. Luckily, the AJAX, setInterval and JSON-encode functions can be combined to create an auto-refresh state, so that registered users on a website can track changes as they happen.

These are just examples of the amazing things you can do with the jQuery library. You can think of other uses and create a good application to include on your website. You can also share those ideas with other developers, so that they too learn some of the cool new effects the jQuery library can help them achieve.

Image Sources: Freepik.com & Pixabay.com
How do spouses living in different states file taxes?

My fiancee and I are thinking about getting married this year. Currently, we live together in California. My fiancee is going to attend law school starting in August of this year, in Washington DC. I will stay in California. She is likely going to have a very low income while she's a student. I expect my income to be between $180K - $200K/yr. I wanted to better understand what the tax implications would be if we got married. I know that there will likely be federal tax savings from filing jointly. However, I'm concerned about my income being taxed twice: once in California, and once in whatever state my fiancee resides in (e.g., Washington DC). Is there a possibility that I'll be paying taxes twice on the same income? How would we even file our returns? Would the federal return be MFJ, while we each file separate returns in California and DC as MFS?

You will generally never have to pay state income taxes twice on the same income. You might be taxed by two states for some portion of your income, but in that case, you would be able to claim a tax credit on one state's income tax return for the tax paid to the other state on that piece of income (up to the amount of tax you paid for it in the first state), so the net effect is that you pay whichever is the higher rate of tax between the two states on doubly-taxed income. Generally, residents of a state are taxed by that state on their worldwide income, while nonresidents of a state are taxed by that state only on their income from that state. You are a resident of CA and a nonresident of DC, while your future wife will be a nonresident of CA and a resident of DC (or at least she will be after her move; for 2020, she will be a part-year resident of CA (resident before her move) and a part-year resident of DC (resident after her move); she will have to split the tax treatment accordingly).
So CA would tax your worldwide income and your wife's CA income, and DC would tax your DC income and your wife's worldwide income. However, another complication is the fact that California is a community-property state. That means, since you are domiciled in California, your income after marriage is considered community income, and so half of it is considered your income and half of it is considered your wife's income. The half that is considered your wife's will be CA-sourced income of a DC resident, so it will be taxed by both CA (since it's CA-sourced income) and DC (since it's income of a DC resident). So this basically means that all of your income will be taxed by CA, and half of it will be taxed again by DC, and you will have to claim a tax credit for that half that is doubly-taxed. In the case of CA and DC (it may be different for other pairs of states), the tax credit for doubly-taxed income needs to be claimed in the person's state of residence, so in your case, on the DC tax return, since it's CA income of a DC resident (your wife). All of the above is true no matter if you file jointly or separately. Filing jointly doesn't mean that all of both of your incomes will be taxed by both states. When one resident and one nonresident (or part-year resident) file jointly, they are usually supposed to use the appropriate nonresident or part-year resident form so that only part of the income is taxed. For CA, a joint filing of a resident and nonresident uses the 540NR nonresident form, and taxes will be based on the portion of your joint income that is taxed in CA, and the rate of tax based on your whole joint income. For DC, I believe you would have to file separately if one is a resident and the other nonresident, but it is better for married couples to file separately in DC in general anyway (since DC has the same brackets for filing jointly and separately); your wife would claim the out-of-state tax credit on her separate DC return. I see. 
To make sure I understood correctly, would these be the returns we file: (1) We file a joint federal return showing $200K of joint income. (2) We file a joint 540NR showing $200K of joint income. (3) I file a DC nonresident tax return showing no income (since all of my income is earned in CA). (4) Wife files a DC resident tax return and claims a credit for taxes paid to CA. Also, do you have to wait till the next tax year to claim the credit? Or can you claim the credit for CA taxes on the same year's DC return?

@horse-radish: That's basically correct. You claim it on the same year's return. I am not sure you need to file a DC return if you are a nonresident.

@horse-radish: In a more traditional state (not DC), you would typically file a joint nonresident return, with your full income as federal amounts but only half of your income as that state's amounts, and then take the credit for taxes paid to CA.
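To make the credit mechanics concrete, here is a toy calculation with made-up flat rates (real CA and DC taxes are bracketed and have many adjustments; the 50/50 split follows the community-property explanation above):

```python
def net_state_tax(income, ca_rate, dc_rate):
    """Illustrate the double-tax credit on community income.

    Under CA community-property rules, half of the CA earner's income is
    treated as the DC-resident spouse's income, so that half is taxed by
    both states. DC (the residence state) then credits the CA tax paid
    on that half, capped at the DC tax on the same income.
    """
    ca_tax = income * ca_rate              # CA taxes all CA-source income
    doubly_taxed = income / 2              # the spouse's community half
    dc_tax_before_credit = doubly_taxed * dc_rate
    credit = min(doubly_taxed * ca_rate, dc_tax_before_credit)
    dc_tax = dc_tax_before_credit - credit
    return ca_tax + dc_tax
```

With a DC rate at or below the CA rate, the credit wipes out the DC liability entirely and the couple effectively pays just the CA rate on everything; with a higher DC rate, the doubly-taxed half ends up taxed at the higher rate, matching the "you pay whichever rate is higher" summary in the answer.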
import unittest

from drongo.request import Request


class TestRequest(unittest.TestCase):
    def test_request(self):
        env = dict(
            REQUEST_METHOD='GET',
            GET=dict(hello='world'),
            PATH_INFO='/home',
            HTTP_COOKIE='a=b'
        )
        req = Request(env)
        self.assertEqual(req.method, 'GET')
        self.assertEqual(req.path, '/home')
        self.assertEqual(req.query, dict(hello='world'))
        self.assertEqual(req.cookies['a'], 'b')
        self.assertEqual(env, req.env)

    def test_json(self):
        env = dict(
            REQUEST_METHOD='POST',
            GET={},
            PATH_INFO='/home',
            BODY=b'{"hello": "world"}'
        )
        req = Request(env)
        self.assertEqual(dict(hello='world'), req.json)
In the 3.10.0 release several new page navigation methods were added. I thought I’d say a few words about all of the methods, both new and old.

PushPage / PopPage

The Home Remote app manages a navigation stack for you. When you call PushPage it pushes your page onto the navigation stack and brings it into view. You can navigate back to your previous page by using the system’s Back button or by calling PopPage. PopPage removes the current page from the navigation stack; the system’s Back button triggers the PopPage method internally, so they both accomplish the same thing. PushPage and PopPage have been around for a few years now and they’re not going away. They are meant to be used in conjunction with the new methods.

OpenPage / ClosePage

The latest release introduces the term “Opened” page. When you call OpenPage we will flag the page you’ve provided as “Opened”. With this we can provide better management of the navigation stack for you. Here are some of the things it does:

If you call OpenPage while the page is already “Opened”, it’ll simply ignore the request because it doesn’t have to do anything.

If you call OpenPage while a different page is “Opened”, it will automatically close that previous page first and then open your new page. This means that at any point in time, only 1 page will ever be flagged as “Opened”.

If you call PushPage while a page is “Opened”, the new page will automatically be linked to the “Opened” page. Consider you called OpenPage(“MediaPlayer.xaml”) and, once that page appeared, you clicked on a button that called PushPage(“SoundModes.xaml”). “MediaPlayer.xaml” would still be the “Opened” page even though “SoundModes.xaml” is currently displayed on the screen, but they will be linked together. When you call ClosePage it will remove both of those pages from the navigation stack for you. This is a perfect example of when you would use a combination of both OpenPage and PushPage.
You wouldn’t want to call OpenPageon “SoundModes.xaml” because you’d still like to retain the ability to go back to “MediaPlayer.xaml”. ClosePage can include an optional Page parameter. When a Page is supplied it will only execute if the name matches that of the current “Opened” page. When you leave the parameters blank, it’ll close the current “Opened” page, if there is one. As you can see, these methods are very similar to PushPage & PopPage. They are meant to help prevent misuse of the navigation stack. These will be especially useful when developing Scenes that perform automated page browsing. OpenDeviceDetails / CloseDeviceDetails These methods are identical to ClosePage. The only difference is that you’ll supply the Device Name instead of the XAML file name. Internally it’ll look up the DetailsTemplate you’ve assigned to the device & then supply that value to either Clears the navigation stack & replaces the root page with the XAML file you supply. Clears the navigation stack & replaces the root page with the automatically generated group page. The parameter you supply is the name of the Removes all but the root page from the navigation stack.
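The stack behavior described above can be modeled in a few lines of code. The sketch below is purely illustrative (it is not Home Remote's actual implementation, and the class and method names are made up here); it just captures the "Opened"/linked-page semantics from the MediaPlayer/SoundModes example:

```python
# Illustrative model of the navigation-stack semantics described above.
# Not Home Remote code; names are hypothetical.

class NavigationStack:
    def __init__(self, root='Home.xaml'):
        self.stack = [root]   # bottom-to-top; last item is the visible page
        self.opened = None    # the single page currently flagged "Opened"
        self.linked = []      # pages pushed on top of the "Opened" page

    def push_page(self, page):
        # PushPage: display the page; if a page is "Opened",
        # the new page is linked to it.
        self.stack.append(page)
        if self.opened is not None:
            self.linked.append(page)

    def pop_page(self):
        # PopPage / system Back button: remove the current page.
        if len(self.stack) > 1:
            return self.stack.pop()

    def open_page(self, page):
        # OpenPage: at most one page is ever flagged "Opened".
        if self.opened == page:
            return                 # already "Opened" -> ignore the request
        if self.opened is not None:
            self.close_page()      # close the previously "Opened" page first
        self.stack.append(page)
        self.opened = page

    def close_page(self, page=None):
        # ClosePage: only acts when `page` is blank or matches the current
        # "Opened" page; linked pages are removed along with it.
        if self.opened is None or (page is not None and page != self.opened):
            return
        for p in [self.opened] + self.linked:
            if p in self.stack:
                self.stack.remove(p)
        self.opened = None
        self.linked = []
```

With this model, `open_page("MediaPlayer.xaml")` followed by `push_page("SoundModes.xaml")` links the two, and a single `close_page()` removes them both, mirroring the example above.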
Is there a way to configure Emacs to never use a leading ~ whenever it needs to show me a path name?

In almost every situation I come across (e.g. in *BufferList*, in the pre-populated argument prefix for find-file, etc., etc.), Emacs uses a leading ~ in the paths to any files or directories under my $HOME directory. I would like to stop Emacs from doing this [1], and instead use the full explicit path, as in, e.g., /home/yourstruly/foo/bar/baz (and NOT ~/foo/bar/baz). Is there a way to configure Emacs to behave this way?

[1] I work on many different systems throughout the day, each with a different value for $HOME. In my notes for work I need to refer to paths in one or another of these $HOME directories, and it would not be helpful if I were to use a leading ~ when writing down such paths. As a result I find myself constantly having to replace the leading ~ that Emacs insists on giving me with the actual value for the current $HOME. I want to save myself from this recurring annoyance.

The question isn't clear. As you say, "..., etc., etc.". In what contexts do you want to show ~ converted to your home directory (explicitly)? Emacs has 8 zillion contexts where it might show you ~/.... There is no single option to always and everywhere have it show you what ~ stands for. Maybe that's all the answer that you wanted: there's no way to tell Emacs to always substitute for ~. (You could advise function abbreviate-file-name to have it not expand ~ anywhere. But that would no doubt wreak havoc on Emacs behavior here, there, and everywhere.)

I think (setq abbreviated-home-dir regexp-unmatchable) might implement that approach without advising or otherwise overriding abbreviate-file-name.

@Drew: Honestly, I just can't think of any context or situation where I would distinctly prefer Emacs' current default behavior here. At best I may be indifferent (e.g.
in situations where (a) the ~-prefixed value remains completely hidden from me, and (b) the presence of ~ has no further visible effects downstream). Bottom line, I just can't come up with any useful restriction to what I wrote originally. Also, I confess that I am really surprised to learn that this design decision is so fundamental to Emacs' operation that tampering with it would lead to widespread malfunction...

@Drew: I am inclined to try something along the lines of either your second comment (on abbreviate-file-name), or @phils' suggestion, but your remark about "wreak[ing] havoc" by this sort of thing does make me hesitate. Could you give one or two examples of the ways things could break if I were to fiddle with abbreviate-file-name's behavior?

Go ahead and try, if you like. I don't advise it, however. I expect that existing code depends on abbreviated names, in multiple places. IOW, it's likely not just about your seeing such names; it's about code expecting and using such names. @phils' suggestion is likely better than advising the function, in this regard.

Are you running Emacsen on all the different systems and then trying to copy paths from them to another Emacs where you are editing your notes by copy-and-pasting? If so, I would suggest a simple command that takes the path at point as argument, does an expand-file-name on it and copies it to your preferred copy-and-paste place, so that you can then paste it directly. That adds a keystroke, but there is no possibility of messing up anything (I suspect that @phils' suggestion will not mess up anything either, although reassuring yourself that it does not might take some work).
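The last suggestion can be sketched in a few lines of Emacs Lisp. This is an illustrative sketch only, not code from the thread; the command name `my/copy-expanded-path-at-point` is made up here, while `thing-at-point`, `expand-file-name`, `kill-new`, and `user-error` are standard Emacs functions:

```elisp
;; Sketch of the suggested helper: copy the file name at point to the
;; kill ring with the leading ~ expanded to the actual $HOME value.
(defun my/copy-expanded-path-at-point ()
  "Copy the absolute form of the file name at point to the kill ring."
  (interactive)
  (let ((path (thing-at-point 'filename t)))
    (if path
        (kill-new (expand-file-name path))
      (user-error "No file name at point"))))
```

Bind it to a convenient key and you can paste fully expanded paths into your notes without touching `abbreviate-file-name` at all.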
Message from discussion: Entrance IDE Version 1.8.18 is up there!

Date: Tue, 28 Sep 2010 16:31:57 -0700 (PDT)
Subject: Entrance IDE Version 1.8.18 is up there!
From: Tod Landis <t...@dbentrance.com>
To: Entrance <email@example.com>

There is quite a bit of new stuff in there. I'll get some examples onto the blog as time goes on, but meanwhile please feel free to ask here. The IDE version gets the new chart features a little bit ahead of the community version as a thank-you for the support.

Entrance Version 1.8

-- COPY CHART
"Copy Chart" is now in the chart popup menu for copying a chart to the clipboard. To make its background clear, so that what is under the chart shows through, use these commands:

-- FONTSIZE, TITLEFONTSIZE
These two new keywords have been added to set the size of the default and title fonts in points:
You can also set font properties using
FONT (name) (style) (size)
TITLEFONT (name) (style) (size)
The font families available depend on the OS, but "sans serif" and "monospaced" are always available. The font style can be PLAIN, BOLD,
Font sizes are specified in points.

-- HTML #rrggbb COLOR SYNTAX
HTML color syntax is now supported by the command line and servlet versions. (The desktop version supports it as well, but syntax highlighting will

-- IMPROVED LINE AND AREA DRAWING
Line and area drawing have been improved and are faster.
The PLOT syntax document is now available in Spanish, as well as Japanese, French and Russian. To switch between languages, replace the language code "es" with "en", "fr", "jp" or "ru". The previous posts (see below) show how to add PLOT syntax notes to the Entrance Help. Please let us know if you can help with other translations!

-- First Look: GEOMETRY Series Type
XYChart, BitmapChart, and EarthChart now support GEOMETRY columns. These are still experimental, but fun to play with. A geometry column can contain geometric primitives in "Well Known Text" format. Here are some examples:
LINESTRING(3 4,10 50,20 25)
POLYGON((1 1,5 1,5 5,1 5,1 1),(2 2,2 3,3 3,3 2,2 2))
MULTIPOINT((3.5 5.6),(4.8 10.5))
MULTILINESTRING((3 4,10 50,20 25),(-5 -8,-10 -8,-15 -4))
MULTIPOLYGON(((1 1,5 1,5 5,1 5,1 1),(2 2,2 3,3 3,3 2,2 2)),((6 3,9 2,9 4,6 3)))
GEOMETRYCOLLECTION(POINT(4 6),LINESTRING(4 6,7 10))
There is a good introduction to the MySQL "Well Known Text" extensions.
These are some pitfalls to be aware of:
1) GEOMETRY series types are only supported by XYChart, BitmapChart or EarthChart, so your script must begin with PLOT (one of those types).
2) MySQL binary geometry columns must be converted to text using
3) There is a bug in the desktop application that causes GEOMETRY series to resize themselves with each repaint.

// #468 error messages are delayed
// #466 no frame color on rotated bar chart
// #462 save as PNG makes chart go away when using PAGEBITMAP
// #461 BlankChart should clear background to background color
// #447 FRAME INSETS not working for earthchart
// #446 Running multiple scripts from the tree does not report
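As an aside, the simplest "Well Known Text" shapes like the LINESTRING example above are easy to pick apart by hand. The following is a hypothetical Python sketch (not part of Entrance) that parses a bare LINESTRING into coordinate pairs:

```python
# Minimal WKT LINESTRING parser; illustrative only, handles no nesting.
def parse_linestring(wkt):
    """Parse e.g. 'LINESTRING(3 4,10 50,20 25)' into [(3.0, 4.0), ...]."""
    prefix = 'LINESTRING('
    if not wkt.startswith(prefix) or not wkt.endswith(')'):
        raise ValueError('not a simple LINESTRING: %r' % wkt)
    body = wkt[len(prefix):-1]
    # Points are comma-separated; each point is space-separated numbers.
    return [tuple(float(n) for n in point.split())
            for point in body.split(',')]
```

The nested types (POLYGON, MULTILINESTRING, GEOMETRYCOLLECTION) need a real recursive parser; for MySQL work you would normally let the database do the conversion instead.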
Lu Jinnian raised a hand and pulled on his tie. He kicked the documents to one side, then pulled out his phone and called his assistant.

"Mr. Lu?" came the assistant's voice through the phone.

As if on a conditioned reflex, Lu Jinnian asked, "Do you know where Qiao Qiao is?" After he asked that question, Lu Jinnian came to his senses and realized, why would his assistant know where Qiao Anhao was? Just as Lu Jinnian was about to hang up and call Zhao Meng instead, his assistant said, "Miss Qiao? She's with Zhao Meng at ACR right now, eating Japanese food."

At that instant, Lu Jinnian felt like something wasn't quite right, since he thought that his assistant shouldn't be aware of Qiao Anhao's whereabouts. His face immediately dropped. "How do you know where Qiao Qiao is?"

The assistant, who knew Lu Jinnian very well, felt the jealousy seep through the phone, so he quickly explained himself. "Mr. Lu, ten minutes ago, I saw Zhao Meng send a message on her friends' circle. That's how I found out. I even gave it a 'like'."

Lu Jinnian snorted, as though he was ready to hang up the call. The assistant thought about how Qiao Anhao was hounded by the internet right now, and said, "Mr. Lu..."

Yet how could he see their friends' circle, and Lu Jinnian couldn't? He then remembered that he hadn't been on WeChat since last year, when he and Qiao Anhao parted ways on Chinese Valentine's Day. Then his eyes fell onto the phone in the drawer. With that, Lu Jinnian halted the finger he was going to use to cut the call and interrupted the assistant. "Oh, that's right. Go to a branch of China Mobile, get them to make a new SIM for my old number, and take it to ACR."

"Yes, Mr. Lu," replied the assistant. Then he wanted to continue with what he was just going to say. "Mr. Lu, Miss Qiao was..."

"Doot-doot-doot..." The call was bluntly cut off by Lu Jinnian.

Lu Jinnian didn't even change his clothes, heading straight back to his car in the same attire.
Just as he got ready to drive off, the flowers coincidentally arrived. He lowered the car window, and the delivery boy handed him a card. "Sir, please sign for this."

Sign for what... the person who's meant to sign it isn't at home... and she trashed the house... Lu Jinnian secretly wanted to swear, but he grabbed the pen and signed his name. Then he took the flowers and threw them onto the back seat.

When Lu Jinnian sped over to ACR, his assistant was already waiting at the department store entrance. As soon as he saw Lu Jinnian, he immediately handed him his new SIM card. Lu Jinnian put it into his phone and turned it on whilst rushing into ACR. He stood on the first floor in front of the map and found the Japanese restaurant, then rushed over to the escalators.

His assistant followed behind him and said, "Mr. Lu, I have something to report to you, it's about..."

"If there's something, let's talk about it later." Lu Jinnian completely wasn't in the mood to talk about business. Right now, all he could think about was that woman and why she had hung up on him.

"Mr. Lu, what I wanted to talk to you about was..." Inside, his assistant had already started to cry. Could he just finish what he had to say? He wanted to talk about Miss Qiao's issues.

"Do you want me to repeat what I just said?" Lu Jinnian casually asked in response as he entered his WeChat password without even a glance at his assistant.
package online.kheops.auth_server.token;

import javax.ws.rs.BadRequestException;
import javax.ws.rs.core.Response;
import javax.xml.bind.annotation.XmlElement;

import static javax.ws.rs.core.Response.Status.BAD_REQUEST;

@SuppressWarnings("unused")
public class TokenRequestException extends BadRequestException {

    public enum Error {
        INVALID_REQUEST("invalid_request"),
        INVALID_CLIENT("invalid_client"),
        INVALID_GRANT("invalid_grant"),
        UNAUTHORIZED_CLIENT("unauthorized_client"),
        UNSUPPORTED_GRANT_TYPE("unsupported_grant_type"),
        INVALID_SCOPE("invalid_scope");

        private final String errorString;

        Error(String errorString) {
            this.errorString = errorString;
        }

        @Override
        public String toString() {
            return errorString;
        }
    }

    private static class TokenErrorResponse {
        @XmlElement(name = "error")
        private String error;

        @XmlElement(name = "error_description")
        private String errorDescription;

        public TokenErrorResponse() {}

        private TokenErrorResponse(String error, String errorDescription) {
            this.error = error;
            this.errorDescription = errorDescription;
        }
    }

    public TokenRequestException(Error error) {
        super(Response.status(BAD_REQUEST).entity(new TokenErrorResponse(error.toString(), null)).build());
    }

    public TokenRequestException(Error error, Throwable throwable) {
        super(Response.status(BAD_REQUEST).entity(new TokenErrorResponse(error.toString(), null)).build(), throwable);
    }

    public TokenRequestException(Error error, String errorMessage) {
        super(Response.status(BAD_REQUEST).entity(new TokenErrorResponse(error.toString(), errorMessage)).build());
    }

    public TokenRequestException(Error error, String errorMessage, Throwable throwable) {
        super(Response.status(BAD_REQUEST).entity(new TokenErrorResponse(error.toString(), errorMessage)).build(), throwable);
    }
}
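For context, the error codes in the `Error` enum are the standard OAuth 2.0 token-endpoint error codes (RFC 6749 §5.2), and `TokenErrorResponse` serializes them into the standard error body. A 400 response for, say, `INVALID_GRANT` would look roughly like the following (the `error_description` text is an illustrative placeholder, not a value from this code):

```json
{
  "error": "invalid_grant",
  "error_description": "refresh token has expired"
}
```

Throwing `new TokenRequestException(Error.INVALID_GRANT, "refresh token has expired")` from a JAX-RS resource produces a response of this shape.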
[squeak-dev] The Inbox: Chronology-Core-cmm.14.mcz
asqueaker at gmail.com
Thu Oct 18 19:38:53 UTC 2018

> I'm not sure this is right.

I assure you, it is right. This is how Chronology has always worked from the very beginning. To jog your memory, have a look at versions of Timespan class>>#defaultOffset, and see the change you made to the comment. If that doesn't help, more background below...

Background: We had a discussion similar to the "Elephant in the room" thread going on Pharo-Users way back in 2011. As a consequence, I introduced #makeUTC in order to have "globalized Dates", ones that were sure to compare equal with other Dates regardless of which timezone they were created in. All it did was set the 'start's offset to 0:00:00 for all instances I created, so that it would eliminate the timezone from being a factor when comparing dates (while vastly improving hashing performance). To implement that, a #defaultOffset was introduced and used when creating instances of Date, Month and Year. The old #starting: constructor is still there for the original (TZ-specific) Dates, but the default offset to use became defined by #defaultOffset, which was 0:00:00.

The #noTimezone was born of your wanting to make the fact that "we don't want no stinkin' timezone in our Date" more explicit. Instead of a defaultOffset of 0:00:00, you changed it to nil and then put nil checks in all the appropriate places. That way, there is no doubt that those are globalized Dates | Months | Years. We did not do anything else. The original Timespan behavior, which DOES have valid TZ-specific use-cases, is still available via #starting: and always was. There is no need to remove it since we have what we need in TZ-independent Dates via #defaultOffset.

> The current implementation comes from the "trunk thinks its tomorrow" discussion starting on 17 Feb 2016. As a result we introduced the "noTimezone" notion: a date without timezone compares equal to the same date in any time zone.
At some point this made all the tests green. Actually, that thread is about the bug that got introduced when the new utcMicroseconds thingy went in. It is totally unrelated to > - Bert - > On Wed, Oct 17, 2018 at 8:53 PM Chris Cunningham <cunningham.cb at gmail.com> wrote: >> I like this. >> Although I notice that the comparison makes sure that the classes are the same - so a Timespan starting at the exact same time as a Date and with 24 hour duration is not the same as a Date with the exact same values. >> This would allow further speedups if Date, Week, Month, and Year if we wanted - just drop the comparison of duration in those classes. (The general case is still needed to tell Timespan apart from Schedule.) >> If you did that, you probably also want to re-implement hash as well - simplifying it the same way. >> On Wed, Oct 17, 2018 at 8:07 PM <commits at source.squeak.org> wrote: >>> Chris Muller uploaded a new version of Chronology-Core to project The Inbox: >>> ==================== Summary ==================== >>> Name: Chronology-Core-cmm.14 >>> Author: cmm >>> Time: 17 October 2018, 10:07:14.485014 pm >>> UUID: 5d93900c-4a4e-4a4e-80f3-800dbdb07d0b >>> Ancestors: Chronology-Core-tcj.12 >>> - A fix and optimization of Timespan>>#=. Both elements being compared must have the same timezone (or same state of #noTimezone) in order to take advantage of the optimized #hasEqualTicks: comparison. Otherwise (if different timezones), a full comparison of their starts (via #=) is needed. >>> - There was a mention of this optimization put into the class comment. This level of detail may be a bit tedious for users to read at that level, so Brents original comment was restored. >>> =============== Diff against Chronology-Core-tcj.12 =============== >>> Item was changed: >>> Magnitude subclass: #Timespan >>> instanceVariableNames: 'start duration' >>> classVariableNames: '' >>> poolDictionaries: '' >>> category: 'Chronology-Core'! 
>>> + !Timespan commentStamp: 'cmm 10/17/2018 22:00' prior: 0! >>> + I represent a duration starting on a specific DateAndTime.! >>> - !Timespan commentStamp: 'bf 2/18/2016 14:43' prior: 0! >>> - I represent a duration starting on a specific DateAndTime. >>> - If my start has an offset identical to my #defaultOffset then comparisons ignore timezone offset.! >>> Item was changed: >>> ----- Method: Timespan>>= (in category 'ansi protocol') ----- >>> = comparand >>> + ^ self class = comparand class >>> + and: [(((self noTimezone and: [comparand noTimezone]) or: [self start offset = comparand start offset]) >>> + ifTrue: [ self start hasEqualTicks: comparand start ] >>> + ifFalse: [ self start = comparand start ]) >>> + and: [ self duration = comparand duration ] ] >>> - ^ self class = comparand class >>> - and: [((self noTimezone or: [ comparand noTimezone ]) >>> - ifTrue: [ self start hasEqualTicks: comparand start ] >>> - ifFalse: [ self start = comparand start ]) >>> - and: [ self duration = comparand duration ] ] More information about the Squeak-dev
Personal Pages: Graphics Programming

Resumes; game and artificial intelligence projects; real-time software-rendered computer graphics and demo effects; screenshots, free samples and downloads.

Top: Computers: Programming: Graphics: Personal Pages

- Codito, Ergo Sum - Graphics programming tutorials, source code and free downloads.
- Doucette, Jason - Resume. Game, Artificial Intelligence projects. Real-time software-rendered computer graphics and demo effects. Screenshots, free samples and downloads.
- Klosiewicz, Przemek - SweetSheep Homepage - About. Graphics programming, Mandelbrot - Julia set explorer, RayMax - Raytracer, OpenGL. Photos.
- Forbes, Kevin - Research info, downloadable graphics/vision source code, and personal home page.
- Harris, Kevin R. - CodeSampler - Several introductory OpenGL and Direct3D samples, with many written in both for comparison. More advanced samples emphasize vertex and pixel shader development using nVIDIA's new Cg language.
- Spontz Scene Group - A group of computer enthusiasts oriented to audiovisual production using low-level and realtime techniques such as OpenGL or 3dfx, mainly with Macintosh, Windows, Linux and BeOS.
- CG References and Tutorials by Malcolm Kesson - A collection of reference and tutorial pages relating to RenderMan, Maya/mel, Pixar slim, Houdini, Tcl, Shake scripting and C. Also provides a free text editor for Linux, Win and OSX.
- Pentayya, Krishna - K Zone - Repository of 3D models. VRML tutorial. Algorithms, projects and papers.
- Roel z'n Boel - Homepage, hosting personal projects related to programming in C++ and computer graphics (raytracing and real-time).
- Strulovitz, Nir - Ports ("translations") of Denthor's classic graphics programming tutorial for 32-bit compilers: Free Pascal, Virtual Pascal, OpenWatcom C, Gnu Djgpp C. Snapshots taken from Windows and DOS animation demos.
- Mizanin, Marek - Zanir - 3D graphic programs using OpenGL or Direct3D with source code (C++).
- Gruaz, Raphael - Offers CV, projects (DirectX, Vertex Shader, Raytracing, C++, Java, Web), information on travels, and photos.
- Elias, Hugo - In-depth computer graphics programming articles and examples. Reviews of graphics/programming and other books. Links.
- Bujnak, Martin - Computer graphics, realtime 3D engines with dynamic elements like lights and shadows.
- James, Grant - Zeus CMD - Tutorials on C++, Java, Photoshop, 3DMax, Visual Basic.NET, HTML.
- Boissard, Eric - Real-time ocean waves rendering - Real-time deep-ocean waves simulation. Rendering using vertex and pixel shaders. Portfolio.
- Saeed - About. Gallery of custom programs and 3DS MAX generated 3D graphics.
Novel: Let Me Game in Peace
Chapter 1186: River of Forgetfulness

"I don't know, but from the looks of it, we can only think of a way to deal with it if we want to find the Three-Life Stone," An Sheng said as he stared at the river.

"Safety first," An Tianzuo said.

However, Zhou Wen and Li Xuan were alarmed when they saw the officer's appearance.

The officer saluted slightly before walking toward the River of Forgetfulness. However, to Zhou Wen's surprise, he didn't summon his Companion Beast. Instead, he walked toward the River of Forgetfulness himself.

The officers under An Tianzuo made a few more attempts, but the result was the same every time. No matter what kind of Companion Beast it was—regardless of its attributes or type—as long as it touched the River of Forgetfulness's smoke, it would immediately plummet into the river without a trace. It wouldn't even cause a ripple.

On hearing An Tianzuo's order, Jia Nong retreated and left the River of Forgetfulness.

Finding the Three-Life Stone was more complicated than they had imagined. At the moment, the only person who could enter the river was Jia Nong, but he couldn't withstand the blood-colored hand.

The instant the person stepped out, his military uniform seemed to be pulled by some powerful hidden force. It was torn from his body and dropped into the River of Forgetfulness along with his military boots, military cap, and mask.

"What dimensional creature's Mythical Serum did this person use? How interesting," Li Xuan said as he looked at the officer in surprise.

"Commander Jia, only you can enter the river now. I can only ask you to make another trip to lure it out," An Tianzuo said to Jia Nong.

It was difficult to even enter the River of Forgetfulness, much less search for the Three-Life Stone within it.

No one could answer him because no one knew the answer. The white-haired granny guarding the bridge might know the answer, but she didn't say a word.

"Proceed." An Tianzuo nodded slightly.

However, in the blink of an eye, the scattered sanguine aura condensed again, turning into a blood-colored hand that drilled into the River of Forgetfulness.

He walked very carefully. It could be called caution, but for him to enter the River of Forgetfulness at all was obvious madness. The instant he retreated, the yellow smoke erupted where he had been standing. A blood-colored palm extended out.

The officer's body no longer looked like a human's. There was no flesh or blood on it. What they saw was a humanoid gray smoke. When he moved, the smoke ebbed, making him look extremely strange.

Zhou Wen knew that the officer wasn't a beast. He had only used a Mythical Serum made from an extraordinary dimensional creature. His body had mutated to a high level, allowing him to remain in such a state.

An Sheng explained, "That's Commander Jia Nong of the 413 Task Force. The Mythical Serum he used was made from a strange Mythical creature that broke out of Chess Mountain. His body mutated significantly, but he also gained a very special Mythical power. From the looks of it, his power can restrain the power of the River of Forgetfulness."

"Overseer, I'll go down and take a look." Jia Nong found that things were as he had imagined. The River of Forgetfulness's strange power was useless against him, so he sought An Tianzuo's permission.

"Yes sir." Without hesitation, Jia Nong entered the River of Forgetfulness again.

"Yes," Jia Nong answered, and was about to fly across the River of Forgetfulness.

Zhou Wen and Li Xuan also summoned some low-level Companion Beasts and made them try to approach the River of Forgetfulness. However, just like An Tianzuo and company's attempts, no Companion Beast could survive in the River of Forgetfulness.

Jia Nong felt a lingering fear. Although his body wasn't afraid of ordinary physical attacks, the blood-colored palm had only swiped against his body once and had dissolved most of the gray fog over his body. If he had really been caught, he would probably have vanished into thin air.

"I noticed something. I can give it a try," the officer answered.
Developer Console isn't just for Developers, it's a great tool for Salesforce Admins too! Learn why the Developer Console is a great feature to have in your Admin toolbox to generate logs in real-time, show more detailed error messages, update data issues on the fly, and more. Let's first introduce what the Developer Console is and how to access it.

Introductions: Meet the Developer Console

The Developer Console is the native Salesforce "IDE" (integrated development environment). Great! Wait, what does that mean? An integrated development environment is a handy user interface that pulls together useful development and debugging tools all in a single place. Think of it as consolidating and extending some of the development and troubleshooting features you usually access via Setup.

The Developer Console can do a LOT of things, but for our purposes, here are the key abilities you want to know about as an Admin:
- Generate logs in real-time
- Show more detailed error messages
- Troubleshoot order of operations issues
- Show when governor limits are being hit
- Explore your data quickly
- Update data issues on the fly

How to Access the Developer Console

How you access the console depends on your UI and your Org, but once the console opens in its own window, the look and feel is the same both in Lightning and Classic. (Screenshots: Classic without a Global Header; Classic with a Global Header.)

Real-Time, Quick Debugging with Debug Logs

One of the powerful features of the Developer Console is the debugging capabilities. Debugging is the action of finding issues in the system. A debug log is a log of what's happening in the system, including error messages, automation that's being triggered, and how close everything is to governor limits. If you've ever scheduled a debug log in Setup, you know how cumbersome it can sometimes be — to go to Debug Logs, schedule the log, log in as the user you're troubleshooting as, test, log back out, comb through all the logs (you know the pain)….
One of the great things about the Developer Console is that it starts debug logs for you automatically when you open it. Say goodbye to the days of going and manually setting up your own logs and then realizing they've expired halfway through troubleshooting! The Developer Console also generates the logs immediately, letting you easily isolate the log you need to look at for any given action.

You can also easily schedule logs for users and then, while you test as them, see the logs generated in real time! You can quickly check the logs from the Developer Console and continue testing and troubleshooting as needed. This greatly expedites troubleshooting and makes it easy to isolate issues.

Let's look at that step by step. Say you have a user named Amy. Amy has complained that she's hitting an error when she tries to save certain Accounts. Some Account records save fine, but others throw an error. Let's walk through the steps of troubleshooting using the Developer Console.

Step 1: Open the Console
Open the Console while logged in as your Admin user by following the steps outlined above. A new window should pop open.

Step 2: Troubleshooting
Before you try logging in as other users to test, you can test while logged in as yourself to generate logs. This step is pretty easy, because all you have to do is open the console and it does the rest for you! We'll explore reading the logs and filtering them in Step 5.

Step 3: Set up a log for another user
Sometimes you have to troubleshoot as another user because you're unable to recreate an issue as yourself. Here's how to log in as another user (in our case as Amy) to troubleshoot further.

Navigate to Amy's User record so you can do two things:
- Get her user Id, and
- Log in as her for troubleshooting.

In the Developer Console, click Debug > Change Log Levels… Scroll down to User Tracing for All Users. Click the Add button. A window will appear. Paste in the User Id of the User you want to debug (in our scenario, Amy's Id).
Tip: you can get the User Id from the URL of the user record. It will be 15 characters and start with 005.

Almost there! Change the Start Date and Expiration to the start and end times you want by clicking on the fields. Then set your Debug Level by clicking the Add/Change link. Click Done. Then make sure that the Debug > Show My Current Logs Only setting is unchecked. You want to see all users’ logs when you’re troubleshooting other users’ actions. Now you’re ready to troubleshoot as Amy!

Step 4: Troubleshoot as another user

Now, leave the console open and log in as the user you need to troubleshoot as (in our case, Amy). Even though you’re logged in as Amy, the console will continue to run in your System Admin context, giving you full access to all the debug logs being generated. Walk through the steps Amy is having issues with to generate the error logs. When you are ready to review a log in the Developer Console, make sure the Logs tab is selected, then double-click a row to open the log you want to view.

Tip: use the Status column to quickly isolate the log with error messages, but remember that sometimes things will fail silently — the log may say Success but there’s actually an issue logged inside.

Step 5: Review and filter logs

Logs can be huge! And they can be tedious to read through. Thankfully, the Developer Console lets you filter and search the logs for specific tags, letting you isolate information more easily. You’ll notice several preset filters and a freeform search bar at the bottom of the log:

- Executable: Only the lines about automation executing, like flows or code.
- Debug Only: Show only debug statements (typically in Apex code).
- Filter (and search bar): Type your own freeform text to filter the log, but be careful because it’s case sensitive!

Lastly, you can customize the columns that you want to view by clicking on the arrow in any column header of the log.
Investigate Data like a Pro with Query Editor

Another way to troubleshoot issues (like the one Amy is having, for example) is to check the data. Maybe it’s not an issue with automation or user mistakes, but a data problem. Developer Console can help with that too!

Step 1: Open the Console

Follow the steps outlined above to open the Developer Console. A new window should pop open.

Step 2: Select the Query Editor Tab

Step 3: Open the Salesforce Object you want to query (no, not in Object Manager)

You could type everything freeform into the query editor, but sometimes you’re not sure exactly what fields you want to look at! Typing them all out is also tedious. Developer Console lets you actually open an object and look at all the fields, then select those fields for your query. In the Developer Console, click File > Open. In the window that opens, select Objects from the Entity Type list, select the Object in the Entities list, then click the Open button at the bottom left of the window. Once you have the object open, you’ll see a full list of fields — Standard then Custom — along with the field’s data type. Decide which fields you want to query, hold down Ctrl (Windows) or Command (Mac) and then click to select all the fields. Click Query.

Step 4: Filter your query

If you have more than just a handful of records on the object, you’ll definitely want to filter them out! If you’re new to editing SOQL, Salesforce Ben has a great cheat sheet for you! To filter, you’ll use WHERE followed by criteria. To practice, let’s find Accounts with the Type = ‘Customer – Direct’.

Step 5: Using query output

This is where the fun begins! After your query executes, you should see something like the screen below. Let’s take a quick tour.

- Workspace Tabs: Kind of like Salesforce Console Apps, Developer Console has tabs that will remain open for everything you do or view.
You can quickly refer back to your open Objects, compare the results of queries you’ve previously run, or compare debug logs, all in the same console screen!
- Executed Query: This space keeps the query you executed for this tab, so you can always remember the criteria you used to get the output.
- Query Results: This is the actual output of your query, with the fields in the order you queried them.
- Actions: Here you can refresh the query (like refreshing a report) and actually make data changes directly from the console. As long as you have the Id in your query, you can double-check fields and update them via the Developer Console directly, saving you time performing quick data corrections. (Tip: As a rule of thumb, always add Id to your queries just in case you need to make edits to the records!)
- Access in Salesforce: Here you can take actions within Salesforce. Click on the record you want to access in the query results and then you can open the detail page in Salesforce or jump right to the Edit page. You can also jump to the object’s New page to quickly create a new record.

This was just a quick introduction to troubleshooting with Developer Console, but there are a lot more things it can do! Trailhead has several hands-on modules you can explore to get practice using the Developer Console. Developer Console Basics and Apex Basics for Admins are both great places to start! There’s also a webinar recording about Developer Console for Admins with live demos, Q&A, and a deeper dive into the Admin-focused features we explored in this post.
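As a footnote to Step 4 above, the practice filter could be written as the following SOQL. The selected fields are illustrative — pick whichever fields you opened from the object — and note that the standard Salesforce picklist value uses a plain hyphen (‘Customer - Direct’), so match your org’s actual value:

```sql
SELECT Id, Name, Type
FROM Account
WHERE Type = 'Customer - Direct'
```

Including Id up front follows the Step 5 tip: with the Id in your results you can edit records directly from the Query Results grid.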
OPCFW_CODE
From Audacity Manual Truncate Silence automatically reduces the length of passages where the volume is below a set threshold level. Throughout this description the words "silence" and "silent" mean sounds that are below the Threshold setting. Threshold for silence Audio at or below this amplitude will be regarded as "silence", so will be truncated. Ignore silence less than Specifies the shortest length of silence that will be truncated by the effect. Silent passages of this length or greater will be truncated. Silent passages of less than this length will be left unchanged. Compress silence by A compression factor which proportionally reduces silences in the waveform that are longer than the "Ignore silence less than" length. Compression is only applied to that part of the silence that is in excess of the ignored duration, so for the default compression factor of 4:1 that "excess" silence would be compressed to a quarter of its original length. This setting has no effect if the "truncate to" length (below) is the same as or less than the "ignore" length. and then truncate to After any silence compression is applied as above, truncation is applied to the entire resulting silence. The "truncate to" length specifies the longest allowable resulting silence in those silences that are truncated. - Setting the "truncate to" length to the same as the "Ignore" length will always reduce the truncated silences to this length. - Silences longer than the "truncate to" length will remain if they were ignored by Truncate Silence because they were shorter than the "Ignore" length. - If compression is applied, truncated silences may be shorter than the "truncate to" length. - Simple usage: Setting both the "Ignore" and "truncate to" lengths to 5 milliseconds (ms) will truncate the silence to 5 ms. This is less than the length of a detectable silence, so will effectively eliminate it. - Truncate length only: Set the "Compress silence by" factor to 1 (effectively disabling it). 
Now any silence longer than both the "Ignore" and "truncate to" lengths will be reduced to the "truncate to" length and never any less than that.
- Proportional length only: Set the "Truncate to" length to some large value, like 1000000. Now that part of any silence greater than the "Ignore" length will always be compressed by the "Compress silence by" factor.
- Proportional truncation with compression factor: The resulting silence is calculated according to the following formula:
- (output silence length) = (ignore length) + ((input silence length) - (ignore length)) / compression
- with the constraint that the output length can't be more than the "truncate to" length. So, setting the minimum to 33 ms and compression to 5:1, a silent passage 1033 ms long would be truncated to 233 ms (33 + (1033 - 33)/5), unless "truncate to" was set to less than 233 ms (in which case truncation would be to that "truncate to" length).

As a real-world example, setting the minimum to 100 ms, the maximum to 5000 ms and the compression factor to 4:1 will have the effect of doubling the speed of a speech track with no pitch change, while keeping about the same cadence as the original.

Truncate Silence only removes audio; it does not reduce or eliminate noise in the silent sections that it keeps.

Note: Avoid using Truncate Silence on selections which have fade-outs or fade-ins, since it will remove the quietest part of fades. If you need to add fades, apply Truncate Silence before adding fades.
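The formula above can be sketched in Python. This is only an illustration of the arithmetic, not Audacity code; the function name and default parameter values are our own:

```python
def truncated_silence_ms(input_len, ignore=33, compression=5, truncate_to=None):
    """Resulting length of one silent passage, per the formula above.

    input_len, ignore and truncate_to are in milliseconds; compression is
    the N in an N:1 "Compress silence by" factor.
    """
    if input_len < ignore:
        # Passages shorter than the "Ignore" length are left unchanged.
        return input_len
    out = ignore + (input_len - ignore) / compression
    if truncate_to is not None:
        # Truncation caps the resulting silence at the "truncate to" length.
        out = min(out, truncate_to)
    return out

# The worked example: 33 ms minimum, 5:1 compression, 1033 ms of silence.
print(truncated_silence_ms(1033))  # 233.0
```

Only the excess over the "Ignore" length is divided by the compression factor, which is why 1033 ms becomes 33 + 1000/5 = 233 ms.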
OPCFW_CODE
Update: as of Wednesday morning, 1/8/14, the class is full with 40 people signed up. Which is awesome! (Yeah, we know, it says 18. Not everyone registered via Meetup.) Some spots may open back up. If you would like to be notified if/when that happens, sign up for the waiting list here at Meetup. If you've registered here at Meetup but have not paid via PayPal, you should have received a message on Wednesday evening with some instructions. We're holding your spots for now, but those seats will be released to other people if we haven't received payment by Saturday 1/11/14. STL Python and Fort Pedro Informatics (http://fortpedro.com/) are holding a five-session Introductory Programming class at Nebula (http://nebulastl.com/) in the heart of the Cherokee Street Business District (http://www.cherokeestation.com/). This class is for total beginners with an interest in learning to program, but no prior programming experience. Python (http://www.python.org/) was developed as a programming language for teaching, but it is also used for high-productivity development in organizations such as Google, Dropbox, NASA, Rackspace, and several web sites you probably use. Classes are every other Wednesday from January 22 to March 19, 6:30 pm to 8:30 pm, with substantial snacks (pizza, etc.) and non-alcoholic beverages provided. On the Wednesdays that fall in between class meetings, instructors will be available in a Google Hangout for "office hours", in case you get stuck and need some help. The first meeting will focus on getting to know your classmates, and getting Python installed and running on your laptop. (Windows, Macintosh, Linux, etc. — we’re flexible.) Important: The cost for all five classes is $25.00, which covers the cost of food and space rental. In addition to RSVPing here, please register at http://fortpedro.com/classes/intro-to-programming-with-python-january-march-2014/ .
Update: since Meetup's not super clear on what happens on what date: • Welcome Session – January 22, 2014 • Office Hours – January 29, 2014 • First Class – February 5, 2014 • Office Hours – February 12, 2014 • Second Class – February 19, 2014 • Office Hours – February 26, 2014 • Third Class – March 5, 2014 • Office Hours – March 12, 2014 • Last Class – March 19, 2014
OPCFW_CODE
5 Reasons The Microsoft Zune HD Will Contend With The Apple iPod By Brian Kraemer, ChannelWeb 12:11 PM EDT Thu., Sept. 17, 2009 Competing in the mobile music field inevitably means competing against Apple (NSDQ:AAPL)’s wildly popular iPod in all of its various incarnations, but Microsoft (NSDQ:MSFT) is working to differentiate itself from the rest of the field with its Zune HD. To challenge Apple’s iPod for the mobile music device market crown, the Microsoft Zune HD is bringing back some of its favorite features from previous iterations while adding some important new functionality to entice new users. High Definition Is King High definition is an important enough differentiator that Microsoft put it in the name. Sure, the touch-screen device is a music player, but video on the go is becoming more and more important to consumers. And in the home, HD television and clear images are important. That’s why Microsoft pulled out all the stops and put Nvidia’s Tegra system-on-a-chip platform — which the chip maker calls the “world’s first HD mobile processor” — in the Zune. The mobile device from Microsoft is built to show movies and television shows in 720p. Microsoft is pairing an online content store with the Zune HD, something Xbox owners are already familiar with. While the offerings for the Microsoft Zune HD are still paltry compared to Apple’s iTunes store, it is a crucial step for Redmond if they are serious about competing against the incumbent iPod. Another important feature for the Zune HD is its ability to carry content from one device to another. If a Zune user purchases a movie in HD, it will be available on the Xbox at home or even on a Windows Mobile device. Unlimited Music Service One thing that does differentiate Microsoft’s Zune HD from Apple’s line of iPods is the way users can consume music.
Unlike the iTunes store, which requires paid downloads for seemingly everything, Microsoft users can pay a flat fee of $15 a month and have unlimited access to streaming music. The problem with the subscription model, of course, is that once you unsubscribe your music library is lost. At least, that was the case, until Microsoft decided to allow users to keep 10 songs a month as part of the streaming music deal. It’s not a huge amount, but it is at least a start. Microsoft will still have a built-in radio tuner that owners of previous versions of the Zune will recognize, but with a twist. In addition to getting the regular terrestrial channels that appear on the radio dial, the Zune HD is taking HD to the radio as well. That means there’s more content available for customers to consume, all from the mobile device. There Is A Browser — Sort Of Microsoft isn’t unaware of the fact that mobile devices are continuing to converge as consumers become less interested in having cluttered pockets and are more interested in a single device that suits all their needs. To that end, Microsoft equipped the Zune HD with a Web browser which, from most reports, is basic and serviceable at best. The iPod and iPhone were released as mostly polished devices with functionality that is simple and intuitive. Microsoft’s Zune HD isn’t quite there — yet. But the fact that the browser is included is encouraging and points to the potential of the mobile device.
OPCFW_CODE
Setting a static IP address to a guest machine if the host also has a static IP

I have a CentOS 7 VM installed on VirtualBox and it works fine, but when I use the command ip address I get the IP as 192.168.0.X. The host machine (Windows 10) is on 192.168.100.X and I need the guest to be on the subnet 100.X, visible to all other hosts and PCs. I've searched for a solution and the reason seems to be that the host is configured on a static IP address. I can't change it to DHCP because it's mandated by my company, but I've acquired a static IP I can use for my machine and I need to set it up. I tried many solutions on the internet but nothing seems to help. I'm used to setting the adapter as a bridged adapter but I guess this isn't feasible here, what can I do? I have the interface in Bridged Adapter mode and I've never had this issue before with the same appliance when I tried it on a different PC. I tried installing the machine and VBox many times but without success, what is the issue?

Edit: So I played with both VirtualBox's and the VM's configuration, assigning <IP_ADDRESS>/24 to the VirtualBox adapter (DHCP server settings). VM: eth0: NAT, eth1: host-only adapter. The VM's IP is changed to <IP_ADDRESS> and I can ping the VM from the host, but can't ping the host from the VM.

<IP_ADDRESS>/24 really should be avoided, as all cable modems use this subnet, running their web server on <IP_ADDRESS>. Have you tried assigning a different IP to that adapter?

The proper way to do what you want is to set, from the host, the guest's adapter to bridged mode so the guest can get IPs from the same network segment as your desired network. All your guests will then be able to use IPs from your network directly. If you don't want to use your main network's IPs, just put the 2 machines in the same network segment (like 192.168.0.X/24) and make sure you have the correct settings so they communicate with each other.
Originally I configured the machine in bridged adapter mode as I always do, but it gives the IP as 192.168.0.X

So what's wrong with that?

It's not good for us; we use the VM to access an application via its IP address. Right now it's not visible over the network, it's only visible on the local host.

As I said, if you want access from the main network, bridge them and use the main network's IPs. It does not matter if they are static or DHCP as long as they belong to your network's segment.
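With the bridged adapter in place, the guest still needs to be told to use the acquired static IP instead of DHCP. A minimal CentOS 7 ifcfg sketch follows — the interface name and every address below are hypothetical placeholders to be replaced with the IP, gateway and DNS the company assigned:

```ini
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 — all values are placeholders
TYPE=Ethernet
DEVICE=enp0s3
NAME=enp0s3
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.50
PREFIX=24
GATEWAY=192.168.100.1
DNS1=192.168.100.1
```

After editing the file, restart networking with `systemctl restart network` and verify the result with `ip address`.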
STACK_EXCHANGE
I'm Léonie Watson. I'm an accessibility engineer at The Paciello Group (TPG), and an accessibility consultant working on Gov.UK. Through TPG I do a lot of work with the W3C, where (amongst other things) I'm co-chair of the Web Platform Working Group. I blog on Tink.UK and sometimes write for Smashing Magazine, SitePoint.com and others.

How did you become a digital consultant, what is your background?

I'd been interested in computers through the 1980s as a kid, and began playing on the web as it began to take shape in the mid-1990s. At the time I was a drama student and didn't have any intention of working in technology, but chance resulted in me working for one of the first ISPs in the UK in 1997 and from there I taught myself how to create websites. I lost my sight in 2000 and when I went back to work it was as a consultant for a digital agency called Nomensa. They specialised in digital UX and accessibility, both of which were still new things in those days, and with a background in web development I found the technical aspects of accessibility very compelling.

Where did you study? Looking back, would you recommend your path to beginners?

My first undergraduate degree is in performing arts, my second is in computer science. If you talk to a lot of people who were around during the early days of the web, you'll find they come from a wide range of different educational backgrounds – chemistry, psychology, history and so on. There wasn't really an expected educational pathway, and that may still be true today although I think a formal education is perhaps more of an expectation these days. It's always helpful to study the subject you want to make into your career, but with some exceptions I wouldn't insist it's a necessity. Having knowledge of other subjects and disciplines is sometimes extremely useful – my performing arts experience certainly helps me when I give talks at conferences, even though I'm talking about code and not Shakespeare!
What are the books helped you to improve your professional skills? One book I thoroughly recommend is Apps for all: Coding accessible web applications, by Heydon Pickering. Heydon has done the remarkable thing of writing a book about coding and accessibility that is both entertaining and extremely informative. What is your ideal work environment? Do you work at studio or prefer to mix a few activities? Everyone at TPG works from home because we live in different countries. We're a small tightly knit team though, so we're in regular contact on Skype, email and Twitter most of the time. I enjoy working from home – I'm usually online most of the time because I enjoy connecting with friends, colleagues and contemporaries around the world to talk about things – sometimes serious, sometimes not so much! Who are the people you admire most? I have enormous time and respect for the people I work with at TPG, including Steve Faulkner (@stevefaulkner), Karl Groves (@karlgroves), Henny Swan (@iheni), Patrick Lauke (@patrick_h_lauke), Ian Pouncey (@ianpouncey), Billy Gregory (@thebillygregory), Hans Hillen (@hanshillen) and Gez Lemon (@gezlemon). Beyond TPG people like Heydon Pickering (@heydonworks), Jamie Knight (@jamieknight), John Foliot (@johnfoliot), Alastair Campbell (@alastc), Bruce Lawson (@brucel), Dan Brickley (@danbri) and too many more to mention! There are a lot of brilliant people doing a lot of wonderful things out there on the interwebs.
OPCFW_CODE
Xamarin Forms Content Page: Change flow direction to Right to Left?

How can I change the flow direction of a Xamarin Forms Content Page from Left to Right to Right to Left? Something like the Windows Phone Flow Direction property?

@Hodor thanks, there is no support for such a thing (at least at this time). A workaround for this when using list views is to create multiple list item templates with LTR/RTL directions and use them according to the current UI culture. Another workaround for other controls is to implement a renderer for each control type and change its HorizontalOptions or XAlignment according to the UI culture.

Hello, how can I write a renderer for each control for RTL support? Can you please give me a sample? Is this option possible for Android, iOS and Windows Phone?

Hi, please check this guide: https://developer.xamarin.com/guides/xamarin-forms/custom-renderer/ Good luck!

It's 2017 and Xamarin Forms does not support RTL yet..

It might be worth mentioning that the Grial UI Kit product has just added RTL support. More details here.

Try the latest. In the latest Xamarin Forms 3.0.0: If you’re making apps that support right-to-left languages, we have great news for you: Xamarin.Forms 3.0 makes it easier than ever to flip layouts to match language direction! When supporting languages such as Arabic and Hebrew that flow right-to-left, you may now tap into the very easy to use FlowDirection property on any VisualElement instead of using platform-specifics or effects as you may have used previously. Because you already know the direction the device prefers by accessing Device.FlowDirection, updating your UI could be as easy as adding this to the head of your page in XAML:

FlowDirection="{x:Static Device.FlowDirection}"

For more information about updating your applications to support right-to-left layouts: Right-To-Left Localization in Xamarin.Forms Blog

You mean the navigation flow?
Usually left => right means adding a page on the navigation stack and right => left means removing it. You can extend a navigation controller in the "native" C# code and make custom animations. There are many ways to do that. Here's an example in MonoTouch (note that the extension method has to live in a static class to compile):

public partial class ScreenController : UINavigationController
{
    private Page currentPage = null;

    private void setCurrentPage(Page p)
    {
        currentPage = p;

        // Using PresentViewController: will set the current page to root.
        PresentViewController(new UINavigationController(currentPage.CreateViewController())
        {
            ModalTransitionStyle = UIModalTransitionStyle.FlipHorizontal,
        }, true, null);

        // Using custom animation.
        this.PushControllerWithTransition(currentPage.CreateViewController(),
            UIViewAnimationOptions.TransitionFlipFromLeft);
    }
}

public static class NavigationExtensions
{
    public static void PushControllerWithTransition(this UINavigationController target,
        UIViewController controllerToPush, UIViewAnimationOptions transition)
    {
        UIView.Transition(target.View, 0.75d, transition, delegate()
        {
            target.PushViewController(controllerToPush, false);
        }, null);
    }
}

Thanks, but I meant the layout direction, RTL or LTR. Is it possible to change it the same way we do it in the Windows Phone SDK?

I don't think so. The only orientation you can change on layouts is between vertical and horizontal.
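Coming back to the Xamarin.Forms 3.0 FlowDirection answer earlier in the thread, a minimal page might look like this sketch (the class name and label content are illustrative):

```xml
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MyApp.MainPage"
             FlowDirection="{x:Static Device.FlowDirection}">
    <!-- On a device set to an RTL language, child layout flips automatically. -->
    <Label Text="Hello" Margin="20" />
</ContentPage>
```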
STACK_EXCHANGE
- What does it take to be an organizer of a WooCommerce meetup?
- Do some research
- A Guide To Popular Meetup Groups In Florence
- How To Organise A Successful Meetup Group

The advice and experience on picking meetup topics is fairly universal - ask your members. Valerie Runde sums it up for us with: The approach to lining up guest presenters is as varied as the skits on Saturday Night Live.

What does it take to be an organizer of a WooCommerce meetup?

Overall, it seems to be a case of personal style, rather than a particular method, that works best. Our experts share their own thoughts on finding speakers and presenters too. With your topic and speaker lined up, finding a place for your meetup now becomes top priority. Most groups meet at offices, coworking spaces or local bars. As far as our group goes, we make an effort to hold a few meetups in the surrounding suburbs so that we can reach the greater Philadelphia area. Send an email with helpful details a day or two before the meetup to ensure that your attendees find your meetup without trouble the day of. After creating the event on meetup. Let your coworkers know about each event and invite them to each one. Invite local clients and if your company has an email newsletter, no harm in including it in that either if you can swing it; bonus points if the email has a segment for local subscribers. Speaking of email, use the email feature on Meetup. Include helpful tips on parking, if there will be food or drinks provided, if it's a suite inside of a building, etc.
Yet the most important and most impactful way to promote your meetup is your attendees talking with their friends and coworkers about the quality content and relationships that were forged by attending. Only 4 of the 6 meetup group organisers said that they ever had sponsorship and none said they had consistent sponsorship. David Dylan Thomas shared that: I hope to one day attend your Content Strategy meetup, please invite me!

Have a question not answered here? Reach me on twitter nicolecherieh. After obtaining a M.

Do some research

All meetups must adhere to the Code of Conduct.

Starting a Meetup

WooCommerce pays any dues from meetup. Organizers are listed as co-organizers, as are any existing co-organizers. We ask that you remove any requirements to join. WooCommerce meetup groups are open to all who are interested.

A Guide To Popular Meetup Groups In Florence

We ask that any member of the group be allowed to organize events rather than the organizers acting as gatekeepers. If someone wants to organize a Saturday morning coffee shop get-together that only 5 people attend and you want to organize a more formal presentation for 80 people, both of those are valued by us.

What does it take to be an organizer of a WooCommerce meetup?

You will be responsible for finding a donated venue that is appropriate for the number of people and location of the attendees. It does not need to be specific to WooCommerce but it does need to relate to WooCommerce in some way. For example, a talk on eCommerce marketing. You may need to find a sponsor to cover costs of the venue, food, and drinks. A company can host a meetup as a sponsor but the meetup is organized and run by an individual. We ended up pivoting halfway through and having the instructor just give a demo of the language. The title says it all. We were trying to do too many things in a short amount of time.
Even though the instructor was knowledgeable, the learners' eyes started to glaze over and it seemed like they left more confused than before. Running a meetup is both rewarding and time consuming.

Gwendolyn Faraday

In the beginning

The first few meetings we had were filled with code and coffee. A note about Meetup.

Tech interview practice

We recruited nine local developers with experience giving technical interviews.

Participant reviews from the meetup

This event included practice answering various types of interview questions as well as a whiteboarding component.

Mini Hackathons

Collaboration is such an important skill to master.

A team showing off what they built

For the event, people can either choose whom they work with or randomly be selected for a team.

How To Organise A Successful Meetup Group

Mental Health Meetup

Mental health is something that no one really wants to talk about even though it disproportionately affects the tech industry (source).

Ed speaking at the meetup about mental health in the developer community

Ed Finkler from Open Sourcing Mental Illness came and spoke with us about how we can properly help people and open up discussions about these problems in the workplace. I still feel bad about that one. In the future, we may have an online environment already set up before the event for participants to use.

Intro to Node and Express with Mongo

The title says it all.

Tips for Running Meetup Groups

Based on my experience, here are a few tips for running successful meetup groups.
OPCFW_CODE
Expansion tank vs. safety valve position I have this safety valve that comes with a non-return valve in it: I'm wondering if the expansion tank should be installed in x1 or x2 point: Also, should I get a safety valve with max. 8 bar, or max. 6 bar is good enough? more details: the real pressure in the pipe of cold water intake is 4 bar expansion tank states that the maximum pressure the tank can withstand is 8 bar safety valve stats say that the maximum pressure the valve can withstand is 6 bar the situation is as follows: I'm trying to connect the heat pump unit to the cold water intake. the heat pump unit has a tank inside and produces hot domestic water & also water for underfloor heating. The issue is that I am not sure if expansion tank should be installed between safety valve and heat pump unit or: safety valve should be installed between expansion tank and heat pump unit What exactly is the situation you are using this for? As already answered here https://diy.stackexchange.com/q/266605/18078 it goes as close to the pump intake as possible. Also, the pressures on this diagram make no sense at all. @RMDman the situation is "intake of cold water into the heat pump unit where there is a tank that produces hot water for domestic usage as well as for underfloor heating" @Ecnerwal I believe this is a different scenario from the previous question that was about loop for underfloor heating. this one is direct intake of cold water for production of domestic hot water. goes as close to the pump intake as possible - but which one? safety valve or expansion tank goes as close as possible? to explain those pressures... 4 bar is real pressure in the pipe. expansion tank maximum pressure is 8 bar. and the safety valve maximum pressure is 6 bar expansion tank should be installed between safety valve and heat pump unit That is your answer. 
This answer assumes that the non-return valve is there to stop water from flowing back from your heater into the cold water supply (correct me if I have assumed the purpose wrongly here, but in a domestic water supply situation, i.e. not a closed heating loop, that is the point of the check valve). The expansion tank should go between the check valve and the heater input. This allows the natural expansion of heated water to dissipate extra pressure into the tank instead of pushing back on the check valve and potentially damaging the valve or piping.

Check valve, non-return valve, backflow prevention valve, are all interchangeable terms for the same thing. They all allow fluid to flow in only one direction through a pipe, blocking flow in the reverse direction. But yes, the expansion tank needs to go wherever expansion could cause damage - that is, after the check valve, but before the heater. The assumption is that pressure expansion in the other direction is safer because the pipe runs are longer. It also saves your boiler by giving pressure somewhere to go instead of blowing the TPR valve, which is not designed for frequent use.
STACK_EXCHANGE
Got an error when trying to install

I don't know much about this error, but I think I did everything correctly. This is the error: http://pastebin.com/sspcLLhW

When I try to run ./meteor --help I get an error:

pi@raspberrypi ~/meteor $ ./meteor --help
It's the first time you've run Meteor from a git checkout.
I will download a kit containing all of Meteor's dependencies.
######################################################################## 100.0%
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Failed to install dependency kit.

These are the install paths:

pi@raspberrypi ~/meteor $ which node
/usr/local/bin/node
pi@raspberrypi ~/meteor $ which npm
/usr/local/bin/npm
pi@raspberrypi ~/meteor $ which mongo
/usr/bin/mongo
pi@raspberrypi ~/meteor $ which mongod
/usr/bin/mongod

These are the versions:

pi@raspberrypi ~/meteor $ node -v
v0.10.34
pi@raspberrypi ~/meteor $ npm -v
2.1.14
pi@raspberrypi ~/meteor $ mongo -version
MongoDB shell version: 2.1.1

I'm new to Meteor but I want to start it on the RasPi. Thank you for making this.

Hello Tom, I have the exact same problem. I have a Udoo Quad running Udoobuntu 1.1. I tried the steps explained above as well, without success. Any idea?

Hi Julian, can you send me the link to your image please (is it the UDOO official one?). I have a QUAD here and would like to test your problem locally. For us everything is running fine, but we are using Ubuntu Core 14.04 LTS; I created that image by myself. Thanks for additional feedback. Tom

Thanks for answering. It is the current official one, V1.1. Also, I'm using node version 0.10.22, which is the only one that builds on this distribution. Hoping to hear from you.
I'm also having an issue getting it up and running on a UDOO Quad. Is it possible to send the link to the image, so we can try that one ourselves? Thanks!

Hello @TomFreudenberg, have you been able to test it on Udoobuntu version 1.1? Let me know if you need any information from my side.

Hi guys ( @jrullan @enicky ), tonight I was able to create a new builder for meteor's stable release-<IP_ADDRESS> and get through your issues. I downloaded a fresh UDOObuntu_quad_v1.1.zip Ubuntu image from UDOO (http://www.udoo.org/downloads/) and ran the following commands as root:

apt-get update
apt-get dist-upgrade

This showed some totem errors, which I corrected via:

apt-get install -f

Now we need the additional packages for meteor and the universal-bundler:

apt-get install git-core mongodb

To run meteor we need a newer NodeJS version than the one stored in the repository, so we have to build NodeJS ourselves. The debs from ppa:chris-lea won't run on the official UDOO 1.1 image (node hung). Thanks a lot to Mortenvp (http://mortenvp.com/installing-a-newer-gccg-on-ubuntu-12-04-lts); he made my day! You need to update gcc and some libraries, otherwise any build of NodeJS v0.10.23+ will fail as well.

Update gcc and libraries:

add-apt-repository ppa:ubuntu-toolchain-r/test
apt-get dist-upgrade

Here is how to build and install a working NodeJS v0.10.36:

cd /tmp
git clone https://github.com/joyent/node.git
cd node
git checkout -b v0.10.36
./configure --without-snapshot
make -j4  # use 4 cores on quad or -j2 for dual to speed up the build
make install

NodeJS is now installed at /usr/local/bin/node and /usr/local/lib/node_modules. Test the installation via:

node --version
npm --version

Both commands should return their version info. If npm does not print anything, then your node
is not working. Next, build meteor:

cd /usr/local/lib
git clone https://github.com/4commerce-technologies-AG/meteor.git
cd meteor
./scripts/generate-dev-bundle.sh
ln -s /usr/local/lib/meteor/meteor /usr/local/bin/meteor

For a first-time installation, and to check the build, just get the version info:

meteor --version

should return something like: Unreleased, running from a checkout at

As a short test I just created one of the examples. Please check what I have done here; some important information may be in "Add missing link to non-core package npm-bcrypt" (see also #1):

cd /usr/local/lib/meteor/packages
ln -s non-core/npm-bcrypt .

Make sure that your LANG environment variables are set. They are missing by default, and this will cause the mongodb session to fail to start!

export LANG=C
export LC_ALL=C

Now you are ready to create and run the example:

cd /tmp
meteor create --example todos
cd todos
meteor

That's it :-)

Hi Thomas ( @TPXP ), could you please have a look at my comment above for the "right" UDOO installation and check if this might be a solution on the RasPi as well. What I got from my last checks:

- Important to use GCC 4.7+
- Important to build NodeJS from Joyent as described
- Important to use a current NodeJS for meteor (v0.10.36)

Thanks for the help. I will bring all the information here together into some blog entries for better documentation. Greetings, Tom

Thanks Tom. I will test this on my next session with the Udoo.
Hi all, I'm closing this issue because it handles more than one question, and all of them should be solved after the latest update and description. Please read more documentation about installation and known-bug handling at the new blog: http://meteor-universal.tumblr.com/. If you are still having problems running meteor, please open a new issue with details. Thanks for your support, Tom
GITHUB_ARCHIVE
Form a distance matrix in Julia

I'm given a 20*122 matrix p. Each row of the matrix is a 20-dim vector. I want to calculate the distance between each pair of vectors and form a distance matrix. Here's my code:

mul = []
for i in 1:size(p,1)
    push!(mul, norm(p[1,:] - p[i,:]))
end
mul = transpose(tiedrank(mul))
for i in 2:size(p,1)
    for j in 1:size(p,1)
        mul2 = []
        push!(mul2, norm(p[i,:] - p[j,:]))
    end
    mul = vcat(mul, tiedrank(mul2)')
end
mul

I got the error UndefVarError: mul2 not defined. How do I fix this code?

Comments:
- Your code will be super slow, even if you can make it run. Untyped containers (mul = []) are slow, iterative push! is slow, iterative vcat is slow, slices are slow, and operating along rows is slow, since Julia arrays are column major. Worst of all, working in global scope is super slow; make a function instead. I think it is fair to say that your code is very nearly as inefficient as theoretically possible.
- @DNF -- You forgot the final QED. Funniest proof I saw in a while.

Answer: Use the Distances.jl package:

julia> using Distances

julia> p = rand(20, 122)
20×122 Matrix{Float64}:
 0.830266   0.938016  …  0.919852  0.549327
 0.337624   0.863166     0.917122  0.601121
 ⋮                    ⋱  ⋮
 0.122402   0.85733      0.111437  0.694836
 0.0791678  0.763321     0.968744  0.279512

julia> pairwise(Euclidean(), p)
122×122 Matrix{Float64}:
 0.0      2.21042  …  1.58048  1.94589
 2.21042  0.0         1.71839  1.95506
 ⋮                 ⋱  ⋮
 1.58048  1.71839  …  0.0      1.73247
 1.94589  1.95506     1.73247  0.0

Answer: One of the nice features of Julia is that you can copy-paste an efficient algorithm from a paper and get it working in a 1-to-1 mapping of lines of code. Here is how you might write that in Julia:

p = rand(20, 122);
G = p * p';
D = sqrt.(diag(G) .+ diag(G)' .- 2 .* G);

This is exactly what you get if you use a package like Distances.jl:

using Distances
D = pairwise(Euclidean(), p')

Note that I used p = p' in both methods because you wanted the distance between row vectors.

Comment: Remember the dot in front of the minus sign, and also dot the multiplication: sqrt.(diag(G) .+ diag(G)' .- 2 .* G).
Otherwise you get unnecessary allocation of temporary matrices.

Yes, you're right. I was just copying the algorithm without paying attention to performance at all. It makes a speed difference of about 12%, with 40% fewer allocations.

Answer: You define mul2 inside the inner loop, so it has gone out of scope by the time tiedrank(mul2) is called. Move the definition one line earlier, to just before the inner loop.
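The Gram-matrix identity used in the second answer, D²ᵢⱼ = Gᵢᵢ + Gⱼⱼ - 2Gᵢⱼ with G = p·pᵀ, is language-agnostic. As a hedged aside for readers coming from Python, here is the same computation sketched in NumPy and checked against an explicit loop (the variable names are illustrative, not taken from the answers above):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((6, 4))  # 6 row vectors of dimension 4

# Vectorized: squared distances via the Gram matrix G = p p^T
G = p @ p.T
sq = np.diag(G)
# clip tiny negative values caused by floating-point round-off
D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * G, 0.0))

# Reference: explicit pairwise norms
n = p.shape[0]
D_ref = np.array([[np.linalg.norm(p[i] - p[j]) for j in range(n)]
                  for i in range(n)])
```

The np.maximum guard matters in practice: round-off can push the diagonal slightly below zero before the square root.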
STACK_EXCHANGE
KidSafe is no longer under active development. It is left here for historical reasons and for anyone that is still running the code, but will not have any future updates based on the current codebase. As a parent of two children I want my kids to be able to access the Internet to allow them to learn and play. I am also concerned about some of the material on the Internet, and the risk to my wallet if they inadvertently purchased goods online through an app store or one-click online shop. I have therefore created KidSafe as a way to protect my children from the dangers of the Internet without having to sit over them whenever they go online. Rather than hand over the role of monitoring my children's access to a computer, KidSafe provides a means for parents to work alongside their children by creating a fenced, safe area of the Internet that can grow as the children become more aware of the dangers and gain a better understanding of how to keep themselves safe. KidSafe helps protect members of the family from inappropriate content on the Internet. It can protect all computers in the home, including tablets and games consoles, without needing to install additional software on the computers. KidSafe puts the parent in control of which sites your children can visit on the Internet. Running on the Raspberry Pi (a low-cost, low-power computer), it keeps a watch on the sites your children visit to make sure they don't stray to an inappropriate website. The parent can easily approve or override access to ensure that the child can still access sites that they may need for homework, and the parent can still use their favourite social networking sites whilst still protecting younger children. Using existing open source software and some custom application code, all websites accessed are checked against an approved list appropriate to the user's age group.
The parent can add the appropriate websites through the web page to permit only those sites that they consider to be appropriate. Whenever the child tries to access an unapproved site, they are presented with a page where the parent can check the site before adding it to the approved list. Different usernames are provided for young children, teenagers and adults, so that each user can visit the sites appropriate to their age group. This allows an increasing level of access and trust for older children. The short video below shows how easily my child, who was previously working independently, can ask for assistance and have access granted to a particular website. This is a preview release of some of the new features planned for a future release. This includes the new dashboard, making it easier to manage the rules and users. It also has a new look. Significant changes are planned in the future, so this should still be considered beta at this stage. Version 0.2.0 has undergone more testing and should be considered the more stable version. Includes important security fix: there was a software bug in the previous versions that could result in sessions not expiring correctly. This would only be a problem when the loglevel has been manually changed. It is strongly recommended to update to version 0.2.0. New dashboard added, with the ability to view and manage users and rules. Also includes bug fixes in the main code and the MySQL installation instructions. New release - primarily bug fixes, but also adds a new index page with the ability to log off. Work is already underway on 0.2, which will add an initial Dashboard for parents to administer some of the basics (e.g. new users). I'm concentrating on the functionality for now; the "easier" install process will follow at a future date after that. Initial release, working but hard to install. Due to other commitments I am not able to actively work on an updated version.
The code works and I will continue to address any direct issues, but I am not able to actively develop the code at the moment. The software is available to download as two compressed tar files and a text file with the SQL commands to set up the database. You should follow the installation instructions to install these onto the Raspberry Pi. Please view the copyright information regarding use of the circuits.
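The age-graded allow-list described above can be sketched in a few lines. This is a hypothetical illustration, not the actual KidSafe code: the group names, the APPROVED table, and is_allowed are all invented for the example.

```python
# Hypothetical sketch of an age-graded allow-list check (not KidSafe code).
AGE_RANK = {"child": 0, "teen": 1, "adult": 2}

# Minimum age group required for each approved site (illustrative data).
APPROVED = {
    "homework.example.org": "child",
    "social.example.com": "adult",
}

def is_allowed(domain: str, user_group: str) -> bool:
    """Allow a request if the site's minimum age group is at or below the user's."""
    required = APPROVED.get(domain)
    if required is None:
        return False  # unknown site: block and let a parent review it
    return AGE_RANK[user_group] >= AGE_RANK[required]
```

An unknown domain returns False, mirroring the behaviour described above where the child is shown a page asking a parent to approve the site before it joins the list.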
OPCFW_CODE
Network Appliance NS0-520: NetApp Certified Implementation Engineer - SAN, ONTAP (Online Training)

The questions for NS0-520 were last updated at Dec 01, 2023.
- Exam Code: NS0-520
- Exam Name: NetApp Certified Implementation Engineer - SAN, ONTAP
- Certification Provider: Network Appliance
- Latest update: Dec 01, 2023

A customer is testing a dual-fabric FC SAN configuration as shown in the exhibit. The zones are implemented on the switches shown below.

Switch 1 / Zone 1: HBA 0, LIF_1, LIF_3, LIF_5, and LIF_7
Switch 2 / Zone 2: HBA 1, LIF_2, LIF_4, LIF_6, and LIF_8

If all the nodes are in the SLM reporting-nodes list, how many paths per LUN should the customer expect when simulating a node failure by powering off Node 04?
- A. 4 paths
- B. 6 paths
- C. 8 paths
- D. 2 paths

You are asked to troubleshoot performance issues in a customer's SAN environment. After reviewing the NetApp SAN and hosts for best practices, you have narrowed down the issues to the fabric switches. In this scenario, which component should be verified?
- A. F_port config
- B. FLOGI database
- C. loss sig counters
- D. BB_Credit counters

Which statement is true about expanding an aggregate from 32-bit to 64-bit in place?
- A. All aggregates are automatically converted from 32-bit to 64-bit with the Data ONTAP 8.1 upgrade.
- B. The expansion is triggered by an aggr convert command.
- C. The expansion is triggered by adding disks to exceed 16 TB.
- D. The 32-bit aggregates are degraded and must be Volume SnapMirrored to new 64-bit aggregates with the Data ONTAP 8.1 upgrade.

cl01::> lun resize -vserver svm1 -volume db1_vol -lun db1
Error: command failed: New size exceeds this LUN's

What is the reason for the error shown in the exhibit?
- A. The LUN is thin provisioned, but the parent volume is thick provisioned and cannot satisfy the request.
- B. The aggregate overcommitment threshold has been exceeded.
- C.
The parent volume that contains the LUN is out of space.
- D. The LUN cannot be grown past its maximum resize size.

A copy of a LUN was created by using the lun copy command. To allow the copy to be seen on the same host, what would you do?
- A. Change the serial number of the copied LUN.
- B. Configure SLM on the new node.
- C. Create a new port set.
- D. Map the LUN to the same igroup.

Based on the exhibit, you are configuring a 4-node cluster with an iSCSI LIF on each node. You connect using the first LIF from a Windows Server 2012 host. How would you add the other three LIFs?
- A. Use the MCS button.
- B. Use the Devices button.
- C. Use the Add session button.
- D. Wait and then press the Refresh button until the other sessions appear.

A customer has an existing, large, aging FC SAN environment that is reaching end of support. The environment consists of several large database servers with attached LUNs. You propose using an AFF A250, but you need to gather all end-to-end information in the current environment. In this scenario, which resource contains this information?
- A. SAN Health Check
- B. Interoperability Matrix Tool (IMT)
- C. Active IQ OneCollect
- D. Hardware Universe (HWU)

A customer recently added two nodes with FC ports to a 4-node cluster. Which SVM configuration is needed before the customer creates LUNs on the new nodes?
- A. The FC service needs to be enabled on the data SVM.
- B. The existing LIFs need to be migrated to the new nodes.
- C. New FC LIFs need to be created on the cluster SVM.
- D. New FC LIFs need to be created on the new nodes.
OPCFW_CODE
We won't be reinventing the wheel here, so just extract the SWELL grammar file swell.vll from the SWELL.zip file. You will need the grammar file to follow the descriptions and examples below. Our DSL interpreter will need a parser to analyze SWELL scripts, and since the SWELL grammar is quite elaborate we must use a proper parser-generator. However, we would like to avoid the theory and formalisms (as much as possible) and use the simplest tools and approach. So, while there are many good parser generators, our choice for the job is the simple, user-friendly, and easy-to-learn VisualLangLab. To save space, we won't say much about the tool here, but refer interested readers to the Grammar without Tears article. The SWELL grammar was developed using VisualLangLab, and you can use the same tool to inspect and review the grammar. You can also test-run the grammar by feeding it snippets of SWELL. Within VisualLangLab, grammars are like gentle herbivores in an open zoo, and they don't bite your hand if you offer them the wrong kind of leaf or straw. You can also modify the grammar and see how that changes its behavior and the AST produced. If you haven't downloaded VisualLangLab yet, get the file now. Start VisualLangLab by double-clicking this file (Linux, Mac OS, and UNIX users will need to chmod +x VisualLangLab.jar). Select File -> Open from the VisualLangLab main menu, choose swell.vll in the file-chooser dialog, and click the "Open" button. You should see the grammar-tree for the top-level parser-rule, Suite, displayed as in Figure-2 below. Figure 2. Initial grammar view The GUI's menus, toolbars, and displays, and the grammar-tree's icons and annotations are explained in the Grammar without Tears article, but if you need additional help take a look at The GUI. And if you need help with the icons or any other part of the grammar-tree, take a look at Editing the Grammar Tree. A somewhat more complex (but unfortunately essential) concept is the VisualLangLab AST.
A VisualLangLab grammar contains many separate grammar-trees (or parser-rules). You can display any grammar-rule by selecting its name in the toolbar's drop-down list (in the box near the red "A" in Figure-2 above). All these apparently independent grammar-trees actually constitute one large tree rooted at the top-level rule (Suite in this case). You can navigate up any branch of the tree by right-clicking any Reference node and selecting Go to from the grammar-tree's context-menu as shown in Figure-3 below. Figure 3. Navigating up the singleTest branch This process (right-clicking a Reference node and selecting Go to) can be repeated till you reach a grammar-tree that has no Reference nodes. The grammar-trees in Figure-4 below trace the path from Suite (the root) through singleTest, runStatements, stmtEnterText, and swingQuery to treePath. Figure 4. Navigating grammar-tree references Later in the article, when we discuss SWELL Internals, the grammar-trees in Figure-4 will be used to illustrate how application code (in the SWELL interpreter) is interfaced to the parser. To follow that discussion an understanding of each grammar-tree's AST will be required. The AST is displayed in the text area to the right of the grammar tree (see red box marked with a "B" in Figure-2 above). The information shown is the AST of the currently selected node, so to see the AST of the entire grammar-tree select (click on) the grammar-tree's root node. Also, the radio button marked with a "§" (for Depth, near the red "C" in Figure-2 above) should be selected. More details can be found in AST Structure. Actually running a parser-rule with different inputs gives you greater insight into the grammar, and we recommend trying to run some of the grammar-trees in Figure-4 above. Figure-5 below shows the simple steps required to test-run the selected grammar tree. 
Type the input text into the Parser Test Input area (the red box marked "A"), click the Parse input button (near the red "B"), then validate the parser's output (after the words result follows: in the red box marked "C"). If you see any error messages in red in place of the parser's result, the input did not match the grammar. Figure 5. Testing the treePath grammar-tree Figure-6 below shows the stmtEnterText parser-rule (middle of Figure-4 above) being tested. Observe that to parse the input provided, stmtEnterText needs to invoke swingQuery as well, but does not require treePath. Figure 6. Testing the stmtEnterText grammar-tree Testing Parsers has more detailed information about approaches to testing within the VisualLangLab GUI.
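The Reference-node mechanism described above (one parser-rule invoking another, the way stmtEnterText invokes swingQuery) can be mimicked in a few lines of ordinary code. This toy Python sketch is not VisualLangLab output; the rule names and the tuple-based AST shape are invented purely for illustration:

```python
import re

# Toy recursive-descent rules; each returns (ast_node, new_position) or None.
def parse_number(text: str, pos: int):
    m = re.match(r'\d+', text[pos:])
    if not m:
        return None
    return ('Number', m.group()), pos + m.end()

def parse_sum(text: str, pos: int = 0):
    # sum := number '+' number; the two parse_number calls play the role
    # of Reference nodes pointing at another grammar-tree.
    left = parse_number(text, pos)
    if left is None:
        return None
    lnode, pos = left
    if pos >= len(text) or text[pos] != '+':
        return None
    right = parse_number(text, pos + 1)
    if right is None:
        return None
    rnode, pos = right
    return ('Sum', lnode, rnode), pos
```

Just as in the GUI's test runs, feeding the top-level rule an input that doesn't match the grammar produces a failure (here, None) instead of an AST.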
OPCFW_CODE
RFC: Lambda Powertools for Python v2

Is this related to an existing feature request or issue? No response

Which AWS Lambda Powertools utility does this relate to? Powertools itself

Summary

The Python 3.6 Lambda Runtime was deprecated as of Aug 17th, 2022. This follows Python 3.6 reaching End-Of-Life (EOL) on December 23, 2021. This means it's no longer possible to create or update Lambda functions using that runtime. We should take this opportunity to release a new major version of Powertools, and decide what breaking changes to include.

Use case

Customers need to upgrade their Lambda Functions to one of the supported Python runtimes (at the moment of writing, Python 3.7-3.9). We could use this major upgrade to introduce some breaking changes in Lambda Powertools that will simplify the code and/or improve support for existing features.

Proposal

We strive to make minimal breaking changes due to timing constraints with the Python 3.6 deprecation. In ideal scenarios, we would provide ample time, nightly builds, and a Beta for a major version, including a linter to help detect and upgrade from breaking changes. This means V2 will focus primarily on the following themes:

Drop Python 3.6. Follows alignment with the Lambda runtime deprecation policy. It also allows us to upgrade development dependencies and documentation niceties we couldn't before due to Python 3.6's prolonged EOL life in Lambda.

Make all dependencies optional to optimize package size. The AWS SDK makes up over 90% of our package size. We also heard from customers that they want to use additional observability providers, thus wanting the X-Ray SDK to be optional. The Lambda Layer will include all optional dependencies, excluding those already available at the Lambda runtime, to further optimize cold start. This also means dropping SAR Extras as it becomes redundant.

Remove deprecated features.
Specifically, sqs_batch_processor and PartialSQSProcessor, in favour of BatchProcessor, launched 11 months ago, which improved accuracy, security, and speed due to the new Lambda integration.

Improve correctness. Event Handler (API Gateway) doesn't support multi-value headers and cookies by default, as that requires a breaking change. Similarly, Idempotency and Tracer don't use fully qualified names, impacting ABCs or classes that use the exact same method name; changing this would impact billions of transactions in production.

Quick summary

| Item | Issue/PR | Status | Code change required |
| --- | --- | --- | --- |
| Write What's new for v2 | | | |
| Update upgrade guide | #1623 | ✅ | |
| Remove POWERTOOLS_EVENT_HANDLER_DEBUG env var | #1620 | ✅ | |
| Drop support for python 3.6 | #1339 | ✅ | |
| Remove the old batch processing legacy implementation (sqs_batch_processor, PartialSQSProcessor) | #1462 | ✅ | Yes |
| Event Handler REST - multi-value Headers by default, and cookies support | #1455 | ✅ | tests only |
| Use fully qualified names for idempotency | #1330 | ✅ | |
| Use fully qualified names for tracer subsegments | #1454 | ✅ | |
| Update AppConfig API in Parameters/Feature flags due to GetConfiguration deprecation | #1506 | ✅ | |
| Deprecate SAR Extras | #1543 | ✅ | |
| Make all runtime dependencies optional | #1164 | ✅ | |
| Update docs on required dependencies (validation, parser) | #1573 | ✅ | |
| Event Handler REST v1 supports trailing slash route by default | #1609 | ✅ | |
| Replace AttributeValue in DynamoDBStreamEvent with deserialized Python values | #1619 | ✅ | Yes |
| Replace email-validator dependency with str in Parser SES Model | #1608 | ✅ | |

Launch plan

- [x] un-comment "deploy-prod" from publish_v2_layer.yml
- [x] update docs to include SAR v2 and Layer v2
- [x] review Upgrade Guide
- [ ] review What's New
- [ ] remove V2 banner from docs
- [ ] remove V2 admonition from docs

Out of scope

We should make the breaking release as small as possible.
For this, we should not include in v2:
- modularization to further reduce sizes
- Add a default unit for the metrics utility #1180

Potential challenges

We need to decide what to do with the Lambda Powertools Layer. One option would be to release a new "V2" layer. UPDATE: We're launching a separate V2 layer, and ARM support along with it!

Dependencies and Integrations

No response

Alternative solutions

No response

Acknowledgment

- [X] This feature request meets Lambda Powertools Tenets
- [ ] Should this be considered in other Lambda Powertools languages? i.e. Java, TypeScript

Thank you @rubenfonseca! Let's make Event Handler Headers part of v2, as this will impact our ability to support cookies and multi-value headers without a maintenance hit -- https://github.com/awslabs/aws-lambda-powertools-python/pull/1455

We should also add that we're going to make minimal breaking changes due to not having ample time to discuss and run experiments with customers; we shall leave those to 3.0 (modularization, potentially making pydantic the default, etc.)

@heitorlessa thanks, updated! Do you know if there's an existing issue about removing the first batch processing implementation?

Nope, haven't created one yet. We'd need to make a PR to add a deprecation warning too. https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/aws_lambda_powertools/utilities/batch/sqs.py#L24

We started creating an "Upgrade guide" with detailed upgrade steps for each breaking change, together with before/after examples. This will help everyone go faster through each major version's changes.

Early days, but we started documenting the breaking changes in v2: https://awslabs.github.io/aws-lambda-powertools-python/v2/upgrade/

@rubenfonseca one thing we missed is adding another bullet point on multiValueHeaders for the REST v1 payload.

I'd love to see strict typing supported in v2.
As an example: https://github.com/awslabs/aws-lambda-powertools-python/issues/1089

Added #1506 to v2.

@kapilt and everyone else interested in smaller package size: @rubenfonseca just made a breakthrough for V2. Our new Lambda Layer (v2) will be just under 1M with all dependencies installed (using the AWS SDK available at the Lambda runtime, of course). For the V2 launch this month, @rubenfonseca is also completing the work for an ARM Lambda Layer.

Last update before we launch 2.0 this week (release manager: @rubenfonseca). We're working on one last change: Event Source Data Classes DynamoDB Stream Event. Return a deserialized Dict for new_image and old_image instead of AttributeValue. This will unblock customers accessing nested map objects, simplify access to DynamoDB record data, and give near-interoperability with Boto3's TypeDeserializer (the exception being str instead of Decimal). [PR]

In parallel, we're updating (1) GitHub Actions workflows to support v1 and v2 releases, (2) the Upgrade Guide to ensure it is as clear as it can be, and (3) this RFC body, cleaning it up to reflect where we are and share that V1 will be in maintenance mode until Jan 31st.

Until the end of the week you should expect (A) lengthy Release Notes (What's New in 2.0), (B) a detailed Upgrade Guide for three breaking changes that might affect you, (C) a significant package size reduction (10M-->204k), and (D) a much improved Lambda Layer with all dependencies optimized (14M-->2M), including ARM support!

@mew1033 @peterschutt after V2 is launched, let's bring in typing_extensions as a runtime dependency to start working on strict typing for the entire code base. We've made significant efforts in reducing package size (>95%), and bringing in typing_extensions outweighs the effort of the various workarounds needed to make typing work across multiple Python versions. This will open a future door: we could use mypyc to compile certain parts of our code base to a C-extension for extra speed.
Now that we fully migrated our Lambda Layer pipeline and already use Cython for Parser, it'll be a no-brainer to generate platform wheels too when the time is appropriate.

Sounds great! I'll touch base with you after the v2 launch, congrats!

All ready! We're going to publish a pre-release to test all workflows, and then we start writing the official Release Notes for 2.0.

Finally! 🎉 We just released AWS Lambda Powertools Python v2! 🎉🎉🎉🎉 Don't forget to check our Upgrade Guide too :) Thank you everyone for your help getting this out of the door! 🙏 Onwards and upwards! 🎉
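To make the DynamoDBStreamEvent change discussed in this RFC concrete: in v2, new_image and old_image return plain Python values rather than AttributeValue wrappers. The following is a hedged mini-deserializer, not Powertools code, mimicking what boto3's TypeDeserializer does, except that numbers stay str rather than becoming Decimal, as the RFC notes:

```python
# Hypothetical mini-deserializer illustrating the v2 DynamoDBStreamEvent change
# (not Powertools code; type-tag handling is a simplified subset).
from typing import Any, Dict

def deserialize(attr: Dict[str, Any]) -> Any:
    """Turn one DynamoDB AttributeValue dict into a plain Python value."""
    (tag, value), = attr.items()
    if tag in ('S', 'BOOL'):
        return value
    if tag == 'N':
        return value  # kept as str, unlike boto3's Decimal
    if tag == 'NULL':
        return None
    if tag == 'L':
        return [deserialize(v) for v in value]
    if tag == 'M':
        return {k: deserialize(v) for k, v in value.items()}
    raise ValueError('Unsupported DynamoDB type tag: {}'.format(tag))

# A nested map as it appears in a stream record's NewImage:
new_image = {'M': {'id': {'S': 'abc'}, 'total': {'N': '10'},
                   'tags': {'L': [{'S': 'a'}, {'S': 'b'}]}}}
```

Recursing through 'M' and 'L' tags is what unblocks access to nested map objects, which was the pain point the RFC calls out.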
GITHUB_ARCHIVE
import re
from typing import Dict, List, Union

import requests
from ksamsok import KSamsok


class UGC:
    def __init__(self, endpoint: str = 'https://ugc.kulturarvsdata.se/', key: str = None) -> None:
        self.endpoint = endpoint + 'UGC-hub/'
        self.key = key
        self.headers = {
            'User-Agent': 'SOCH-UGC.py'
        }
        self.soch = KSamsok()
        self.relation_types = [
            'sameAs', 'isDescribedBy', 'describes', 'visualizes',
            'hasPart', 'isPartOf', 'isVisualizedBy', 'isContainedIn',
            'author', 'authorOf', 'hasBeenUsedIn', 'isRelatedTo',
            'architectOf', 'architect', 'user', 'userOf', 'child',
            'mother', 'father', 'photographerOf', 'photographer',
            'isMentionedBy', 'mentions',
        ]

    def get_total_items_count(self) -> str:
        url = '{}api?method=retrieve&scope=count&objectUri=all&format=json'.format(self.endpoint)
        data = self.make_get_request(url)
        return data['response']['relations']['numberOfRelations']

    def get_item(self, item_id: Union[int, str]) -> Union[Dict, bool]:
        url = '{}api?method=retrieve&objectUri=all&contentId={}&scope=single&format=json'.format(self.endpoint, item_id)
        data = self.make_get_request(url)
        # use == rather than `is` for value comparison
        if data['response']['relations'][0]['id'] == 0:
            return False
        return data['response']['relations'][0]

    def search_items(self, uri: str = 'all', offset: int = 0, limit: int = 50) -> List:
        url = '{}api?method=retrieve&scope=all&objectUri={}&selectFrom={}&maxCount={}&format=json'.format(self.endpoint, uri, offset, limit)
        data = self.make_get_request(url)
        return data['response']['relations']

    def delete_item(self, item_id: Union[int, str]) -> bool:
        # validate the key before making the request
        if not self.key:
            raise ValueError('This action requires an API key.')
        url = '{}api?x-api={}&method=delete&objectId={}&format=json'.format(self.endpoint, self.key, item_id)
        data = self.make_get_request(url)
        if data['response']['result'] == 'SUCCESS':
            return True
        return False

    def create_item_relation(self, kulturarvsdata_uri: str, relation: str, target: str, user: str, comment: str = None) -> bool:
        kulturarvsdata_uri = self.soch.formatUri(kulturarvsdata_uri, 'rawurl')
        if not kulturarvsdata_uri:
            raise ValueError('{} is not a valid Kulturarvsdata URI.'.format(kulturarvsdata_uri))
        if relation not in self.relation_types:
            raise ValueError('{} is not a valid relation type.'.format(relation))
        if not self.valid_relation_target(target):
            raise ValueError('{} is not a valid target.'.format(target))
        if not self.key:
            raise ValueError('This action requires an API key.')
        url = '{}api?x-api={}&method=insert&scope=relationAll&objectUri={}&user={}&relationType={}&relatedTo={}&format=json'.format(self.endpoint, self.key, kulturarvsdata_uri, user, relation, target)
        if comment:
            url = '{}&comment={}'.format(url, comment)
        data = self.make_get_request(url)
        if data['response']['result'] == 'SUCCESS':
            return True
        return False

    def valid_relation_target(self, target: str) -> bool:
        if target.startswith('http://kulturarvsdata.se/'):
            if not self.soch.formatUri(target, 'rawurl'):
                return False
            return True
        if target.startswith('https://commons.wikimedia.org/wiki/File:'):
            return True
        if target.startswith('https://commons.wikimedia.org/wiki/Category:'):
            return True
        if target.startswith('http://www.wikidata.org/entity/Q'):
            return True
        if target.startswith('http://commons.wikimedia.org/entity/M'):
            return True
        if target.startswith('http://kulturnav.org/'):
            return True
        if target.startswith('http://viaf.org/viaf/'):
            return True
        if target.startswith('http://vocab.getty.edu/ulan/'):
            return True
        if target.startswith('http://iconclass.org/'):
            return True
        if target.startswith('http://data.europeana.eu/'):
            return True
        if re.match(r'^https:\/\/libris\.kb\.se\/.{14,17}$', target):
            return True
        if re.match(r'^https:\/\/\w{2}\.wikipedia\.org\/wiki\/.+', target):
            return True
        return False

    def make_get_request(self, url: str) -> Dict:
        r = requests.get(url, headers=self.headers)
        # note: UGC always returns 200 codes for now.
        if r.status_code == 401:
            raise PermissionError('Bad API key.')
        data = r.json()
        if 'error' in data['response']:
            raise Exception('Unknown error: {}'.format(data['response']['error']))
        return data
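A quick aside on the two regular-expression checks at the end of valid_relation_target: they can be exercised in isolation. The helper name below is hypothetical, for illustration only; it mirrors the Libris and Wikipedia patterns from the class.

```python
import re

# Hypothetical standalone helper (not part of the UGC class): mirrors the
# two regex-based checks at the end of valid_relation_target.
def is_libris_or_wikipedia_target(target: str) -> bool:
    # Libris URIs: prefix plus a 14-17 character identifier
    if re.match(r'^https:\/\/libris\.kb\.se\/.{14,17}$', target):
        return True
    # Wikipedia articles on any two-letter language subdomain
    if re.match(r'^https:\/\/\w{2}\.wikipedia\.org\/wiki\/.+', target):
        return True
    return False

print(is_libris_or_wikipedia_target('https://sv.wikipedia.org/wiki/Stockholm'))  # True
print(is_libris_or_wikipedia_target('https://libris.kb.se/abcdefghijklmno'))     # True
print(is_libris_or_wikipedia_target('https://example.org/not-supported'))        # False
```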
STACK_EDU
See what’s new in the latest version of the app. Today, we launched a few new videos, one on our homepage that shows a bunch of Refraction utilities working together, and one for our VS Code Extension. We also rolled out a small improvement to our dashboard for teams. We’ve also rolled out support for 12 new languages: ABAP, Ada, Apex, Batch, CameLIGO, Cobol, D, Fortran, Julia, PascaLIGO, Pug and Racket. Another big update today! We have two new features: Also, we added support for 9 new languages: Clojure, F Sharp, Handlebars, Liquid, Powershell, Solidity, Visual Basic .NET, XML and YAML. We also launched feature pages on our website, like Unit Tests, so you can see how these utilities work before using them! You can find them all on our Features page. Lastly, if you're an organization wanting to start a business plan but need that extra certainty, we launched our Data Security Policy by special request. That's all for today! Happy hacking. A small update today! We launched the Refraction Blog, a place for us to share our thoughts on code, AI and the future of software engineering. We're using Midjourney to generate all the cover photos. Going for a magic-y theme, hope you enjoy it! Check it out. We released a brilliant update today that replaces Refraction's built-in editor with the same one that powers VS Code! That means you get features like Code Lens, IntelliSense, TypeScript checking, multiline editing and most of the other fantastic features that VS Code offers by default. We rolled out support for Assembly today (thanks for the request, Joaquim!) Additionally, we improved our AI endpoints and started work on a secret project 🤫 Today we're releasing our highly-requested Refraction for teams! You can now create a team, invite members and benefit from: Plus more awesome stuff coming soon. Some other neat updates: A few neat little updates today! First up, we released a couple of social login providers!
You can now create a new Refraction account using your GitHub or Google accounts. Existing accounts can't be merged... yet 🙂 We also released persisted languages! This means that once you pick a language in any tool (like Unit Tests), that language becomes the default for any other tool you use! Thanks for the fantastic suggestion, Roberto. By request, we released support for Less and SCSS / Sass, so you can now test, refactor and improve your CSS preprocessor code! Thanks for the callout, Joaquim. Last but not least, a small set of bugfixes and improvements. Have fun! ✌️ Today we rolled out a new design for the History page, making it easier for you to browse your previously generated code and prompts. We also implemented Gravatar for default avatars as well! If you have a Gravatar set up, you'll notice it will now appear. We updated our UI to make it easier to move between tools! The new sidebar breaks down Refraction utilities into three areas: Improve, Generate and Learn. We fixed a couple of minor language-based bugs on the Style check and Create functions utilities, as well as improved them so they give better responses. If you're a web engineer, you'll like this next bit: we migrated the entire app to Next.js 13's app directory! It was a massive undertaking but now we get to enjoy things like React Server Components! As part of this, we open-sourced a library we developed called gpt-encoder. It's a browser-based implementation of OpenAI's token-encoding algorithm. Last but not least: TONS of other bugfixes and improvements. Until next time ✌️ Massive update today! By updating to a new AI model, we've drastically increased the performance of AI code generation. You should see these effects immediately. We also improved usernames in Canny for subscribed users and improved the response formatting of some of our features. Lastly, we have a new changelog page so you can see what’s new in the latest version of the app. Have fun and let us know what you think!
Use the power of AI to automate the tedious parts of software development like testing, documentation and refactoring, so you can focus on what matters. © 2023 Beskar Labs. Powered by OpenAI. All rights reserved. Legal policies.
OPCFW_CODE
What makes a great software engineer? The following observations and institutional learnings are based on years of working in and with software teams of different sizes, across a number of industries, technologies and architectural complexities. They are in no particular order because in reality, priorities for each of these will shift based on context. What is important, though, is that these are all positive traits, and engineers who display them are almost certain to be productive and likely to accelerate through their careers. Attitude over aptitude: A great attitude can take decades of personal development, but becoming technically competent takes a fraction of that time. A competent team will have the resources to rapidly teach, mentor and develop a junior engineer who doesn’t necessarily start out with the sharpest technical skills. A negative attitude will invariably prevent or hinder learning. Building competencies in people with tertiary-level education is usually far less onerous than changing their attitude. An ability to relate on a personal level to colleagues and clients is crucial, so to be a great engineer, you need to be the engineer whom everyone wants to work with. Being a technical guru is insufficient if nobody wants to work with you. Great software engineers avoid condescension, narcissism and an inflated ego. In communications, this means they are able to listen attentively and be assertive and concise. Great engineers are also able to present themselves without aggression. Awareness and analytical nature: Some engineers struggle with ‘tunnel vision’ into the solution space, where they focus too much on the solution without properly evaluating the problem. Great software engineers are astute, able to place requirements in context and maintain awareness of the problem space while working through the solution.
This awareness should pervade all aspects of their work, especially the ability to be aware of what already exists in a given solution, and how it can be reused to solve requirements more efficiently or in ways that weren’t directly suggested. One of the hallmarks of great software engineers is a shrewd talent for solving problems on their feet in meetings, workshops and pair-programming scenarios. Maturity in approach: Great software engineers apply their minds to all aspects of the engineering process in a considered and unselfish way. Here are a few examples of what this means in practice: - Seeking constructive criticism – great engineers ask what they could be missing in a design, how it might not work and whether it is understandable to the users and the rest of the team - Asking for the background of a requirement to see how it fits into things and seeing the ‘big picture’ – great engineers look to stabilise requirements before rushing ahead with development. For example, similar work items should be checked to see if there is an overlap, offering an opportunity to combine or reuse - Taking initiative – when working on a specific requirement, great engineers will explore ways to improve the solution beyond the exact provided requirement - Communicating regularly – status reports, stand-up meetings and involvement in the broader software and client context are all easy ways to be noticed through excellent service. Career-path intuition: A career in software engineering is vibrant and varied, so it is difficult to have a template for growth that can be applied to junior engineers in a blanket fashion. Being senior is less about how many years you have worked and more about your maturity, competence and the value you can provide to a team. The best software engineers see this for what it is – an opportunity.
Furthering a career in the field is about understanding the context and demonstrating an insight into what makes a software engineer a ‘senior’ or a ‘lead’ within that organisation. Armed with that knowledge, great software engineers can tailor their skills and initiative to push themselves further. Ultimately, what separates great engineers from good engineers is a natural maturity in approach, personality and attitude that goes beyond the expected competence in the technical aspects of the software craft. This might seem counterintuitive given the implied technical depth of the job and, to be fair, in many positions an expert-level technical competency is an absolute minimum. However, the point is that technical strength will often only get you through the door, where these ‘softer’ aspects then take over.
OPCFW_CODE
Expose resolvePath to JavaScript Bundling and snapshotting an Android application requires a way to resolve a relative module path to an absolute one, so this exposes a runtime resolvePath method. I have a question 😄 It looks to me that resolvePath() looks for existing files. Does this mean that module files should be present in the final package even when reading them from the snapshot? If we want this to work the modules should be present in the package. If we don't want to include them in the package we should then include the package.json files in the bundle and create another runtime method that doesn't rely on File in order to retrieve the path. The better approach would be to retrieve the absolute path of a module during bundling, so I am closing this one. I'm receiving the following error (https://github.com/NativeScript/android-runtime/issues/149) with this PR: JNI ERROR (app bug): local reference table overflow (max=512) local reference table dump: Last 10 entries (of 512): 511: 0x12e46c40 java.lang.String "/data/data/org.n... (79 chars) 510: 0x12e35220 java.lang.String "../../observable... (34 chars) 509: 0x12e42530 java.lang.String "/data/data/org.n... (96 chars) 508: 0x12e46980 java.lang.String "/data/data/org.n... (79 chars) 507: 0x12e351c0 java.lang.String "../../observable... (34 chars) 506: 0x12e46350 java.lang.String "/data/data/org.n... (78 chars) 505: 0x12e45e80 java.lang.String "/data/data/org.n... (79 chars) 504: 0x12e2eee0 java.lang.String "../../Observable" 503: 0x12e0b940 java.lang.String "/data/data/org.n... (84 chars) 502: 0x12e45640 java.lang.String "/data/data/org.n... (75 chars) Summary: 2 of java.lang.Class (2 unique instances) 1 of java.lang.String[] (3 elements) 509 of java.lang.String (509 unique instances) Check failed: count_ == 0 (count_=-1, 0=0) Attempted to destroy barrier with non zero count Runtime aborting --- recursively, so no thread-specific detail! This happens when a JNI local ref is not deleted after it's used.
509 of java.lang.String (509 unique instances) You generate strings as JNI local refs without releasing them, so I might suggest either releasing them with env->DeleteLocalRef(jniStrObj), or making them global with env->NewGlobalRef(jniStrObj) if you need them for the whole duration of the program. @KristinaKoeva better yet you can use the JniLocalRef class like so: https://github.com/NativeScript/android-runtime/blob/7c4f2d4ee10d6ba39d6b3e499f4519a159f85a2c/src/jni/CallbackHandlers.cpp#L141
GITHUB_ARCHIVE
How to interpret decreasing AIC but higher standard errors in model selection? I've got a problem choosing the right model. I have a model with various variables (covariables and dummy variables). I was trying to find the best size for this model, so I first started by comparing different models with AIC. From this it followed that the minimum AIC was reached when allowing all variables to stay in the model (with the whole bunch interacting with all dummies). When I compute the summary of the model, all effects are absolutely not significant and the standard errors are very high. I was a bit confused when comparing the "best" (on AIC) model with a smaller model without any interactions. The smaller model had small standard errors and nice p-values... But the AIC is higher compared to the big model. What might be the problem? Overspecification? I really need help in this, because I have absolutely no idea how to handle this! Thanks a lot Thanks! This is a good hint! I'll keep myself busy in studying cross validation! I know the BIC criterion, but unfortunately it doesn't work with glms (I am using R) and I've heard it should not be used if you are not using an OLS method. Might be a rumor... At last a personal question: What would you recommend to use? AIC or Cross Validation? The AIC and standard error measure different things, and if you are trying to minimize standard error, a cross-validation approach may be better to use. Another alternative is the Bayesian information criterion (BIC), which is more parsimonious than the AIC. Also, here's a good article comparing the relations between various evaluation metrics for supervised machine learning: Data mining in metric space: an empirical analysis of supervised learning performance criteria.
As I understand it, AIC and cross validation (at least leave-one-out cross validation) are asymptotically equivalent, so your suggestion that the OP employ cross validation boils down to ignoring the uncross-validated SE and going with the AIC instead. I'd also suggest that your statement that BIC is more parsimonious than AIC needs to be expanded to clarify that BIC will typically choose simpler models than AIC, but that this by no means implies that the models BIC chooses are "better". BIC fundamentally assumes that the "true" model is amongst the models under consideration, which is probably a stretch for most circumstances. AIC on the other hand simply attempts to minimize future prediction error (hence the connection to cross validation) and makes no claims regarding the "truth" of the selected model.
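For intuition on the parsimony point: the two criteria differ only in how they penalize parameter count. A minimal sketch of the formulas, with made-up log-likelihoods (not the OP's data):

```python
import math

def aic(log_lik: float, k: int) -> float:
    # AIC = 2k - 2 ln(L): fixed penalty of 2 per parameter
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    # BIC = k ln(n) - 2 ln(L): penalty grows with the sample size n
    return k * math.log(n) - 2 * log_lik

n = 100
# Hypothetical fits: the big model fits better (-110 vs -120 log-likelihood)
# but spends 6 extra parameters.
print(aic(-120.0, 4), aic(-110.0, 10))        # 248.0 240.0 -> AIC picks the big model
print(bic(-120.0, 4, n), bic(-110.0, 10, n))  # ~258.4 ~266.1 -> BIC picks the small one
```

With n = 100, each extra parameter costs ln(100) ≈ 4.6 under BIC but only 2 under AIC, which is exactly why BIC "typically chooses simpler models" without that implying they are better.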
STACK_EXCHANGE
8 Best Python Image Manipulation Tools Want to extract underlying data from images? This article lists some of the best Python image manipulation tools that help you transform images. In today’s world, data plays a vital role in every industry vertical. Images can be one of the sources for extracting data. An image can be defined as a matrix of pixels, and each pixel represents a color that can be treated as a data value. Image processing comes in handy to uncover underlying data from any image. It helps you extract, manipulate, and filter data from an image. The main objective of image processing is to uncover valuable information from images. There are various applications of image processing, such as image sharpening, image restoration, pattern recognition, video processing, etc. Most image processing applications come under data analysis and data science. And when it comes to data analysis, the first language that comes to mind is Python. It is also the most preferred language for image processing because of its extensive set of libraries, which makes it very easy for developers to perform complex operations using simple lines of code. Let’s have a look at some of the Python libraries which are primarily used for image processing. 8 Best Python Image Manipulation Tools Here is a list of the best Python libraries that help you manipulate images easily. All of them are easy to use and allow you to extract the underlying data from images. 1. OpenCV OpenCV (Open Source Computer Vision Library) is a popular computer vision library. It is an open-source library that is available for various programming languages, including C++, Python, and Java. This library was developed by Intel using the C++ programming language, and it was designed for real-time computer vision. It is ideal for executing computationally intensive computer vision programs.
As OpenCV is a third-party library, we can install it for our Python environment using the Python pip package manager tool.

pip install opencv-python

Grayscale an image in Python using OpenCV

# import opencv
import cv2

# read the image
image = cv2.imread('tesla.png')

# grayscale the image
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cv2.imshow('Original Image', image)
cv2.imshow('Grayscale Image', gray_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

2. Pillow (PIL)

Pillow is another popular Python image processing library. It is the most basic image processing library that every beginner can start with. It is also known as PIL, which stands for Python Imaging Library. The PIL library supports a wide range of file format extensions and provides powerful and complex features to perform image processing. If we compare PIL with OpenCV, PIL is a lightweight library with fewer features, making it easy to learn and handle for a new Python developer who has just entered the realm of image processing. PIL is also a third-party open-source library, and it can be installed using the pip install command.

pip install pillow

GrayScale an Image in Python using Pillow

from PIL import Image

with Image.open("tesla.png") as im:
    # show the original image
    im.show("Original Image")
    # convert into grayscale
    grayscaleImg = im.convert("L")
    # show the grayscale image
    grayscaleImg.show()

3. Scikit Image

scikit-image is a scientifically inclined Python image processing library. It is designed to process images using the NumPy and SciPy libraries. It includes various scientific algorithms, such as segmentation, color space manipulation, analysis, morphology, etc. This library is written using the Python and C programming languages. It is available for all popular operating systems, such as Linux, macOS, and Windows. scikit-image is an open-source library, and we can install it using the pip install command.
pip install scikit-image

Grayscale an image using the scikit-image library

from skimage import io
from skimage.color import rgb2gray

# load the car image from file
car = io.imread('tesla.png')[:, :, :3]

# convert into grayscale
grayscale = rgb2gray(car)

# show the original
io.imshow(car)
io.show()

# show the grayscale
io.imshow(grayscale)
io.show()

4. NumPy

NumPy is the most basic Python scientific computing library. It is famous for introducing multidimensional arrays or matrices in Python. It is a dedicated scientific computing library. In addition, it comes with extensive mathematical features like arrays, linear algebra, basic statistical operations, random simulation, logical sorting, searching, shape manipulation, etc. Again, to install NumPy, we can use the pip install command.

pip install numpy

Grayscale an image using NumPy

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# load the original image
img_rgb = mpimg.imread('tesla.png')[..., :3]

# show the original image
plt.imshow(img_rgb)
plt.show()

# convert the image into grayscale using the ITU-R 601 luma weights
img_gray = np.dot(img_rgb, [0.299, 0.587, 0.114])

# show the grayscale image
plt.imshow(img_gray, cmap=plt.get_cmap('gray'))
plt.show()

5. SciPy

Similar to NumPy, SciPy is also a scientific computing library. It has more features than NumPy because it is built as an extension of the NumPy library. SciPy provides high-level and complex commands and classes for data manipulation and data visualization. It covers a wide range of data processing tools. Also, it supports parallel programming, data access from the web, data-driven subroutines, and other mathematical features. To install the SciPy library, we can take the help of the Python package manager CLI tool, pip.
pip install scipy

Blur an image using SciPy (note that ndimage.gaussian_filter blurs the image; it does not convert it to grayscale)

from scipy import misc, ndimage
from matplotlib import pyplot as plt

img = misc.face()

# show original image
plt.imshow(img)
plt.show()

# blur the image with a Gaussian filter
blurred = ndimage.gaussian_filter(img, sigma=2)

# show blurred image
plt.imshow(blurred)
plt.show()

6. Mahotas

Mahotas is yet another Python computer vision library that can perform various image processing operations. It is designed using C++, and it includes many algorithms to increase image processing speed. Also, it represents images as NumPy arrays. Watershed, convex point calculations, hit & miss convolution, and Sobel edges are the main features available in this library. Mahotas is an open-source library and can be installed using the following terminal command.

pip install mahotas

Convert an RGB image to grayscale using Mahotas

import mahotas
from pylab import imshow, show

# read the image
img = mahotas.imread('tesla.png')

# show original image
imshow(img)
show()

# average the color channels to get a grayscale image
grayscale = img[:, :, :3].mean(axis=2)

# show grayscale image
imshow(grayscale)
show()

7. SimpleITK

SimpleITK is a powerful toolkit for image registration and segmentation. It is built as an extension of the ITK toolkit to provide a simplified interface. It is available in different programming languages such as Python, R, C++, Java, C#, Ruby, TCL, and Lua. This library supports 2D, 3D, and 4D images. The image processing speed of this library is very high compared to other Python image manipulation libraries and frameworks.

pip install SimpleITK

Load and show an image using SimpleITK

import SimpleITK as sitk
import matplotlib.pyplot as plt

logo = sitk.ReadImage('tesla.png')

# GetArrayViewFromImage returns an immutable numpy array view to the data.
plt.imshow(sitk.GetArrayViewFromImage(logo))
plt.show()

8. Matplotlib

Matplotlib can also be used as an image processing library, although it is primarily a data visualization library.
It is generally used to plot NumPy array data, but it can also read image data represented as NumPy arrays. We have already used the Matplotlib library alongside the libraries above to show and plot images. Matplotlib can be installed using the following simple command.

pip install matplotlib

Grayscale an image using Matplotlib and Pillow

# importing libraries
import matplotlib.pyplot as plt
from PIL import Image

# open image using the Pillow library
image = Image.open("tesla.png")

# show original image
plt.imshow(image)
plt.show()

# grayscale the image
plt.imshow(image.convert("L"), cmap='gray')
plt.show()

Here ends our list of the best Python image manipulation tools. Among these eight libraries or tools, the most used Python image manipulation or processing libraries are Pillow and OpenCV (SimpleITK in some specific cases). If you are thinking of building a project related to image processing, such as identifying objects or color manipulation, consider using the OpenCV library because it is a huge library with lots of advanced features. The other libraries also support some image manipulation or processing features but are not as efficient. Vijay Singh Khatri Graduate in Computer Science, specializing in Programming and Marketing. I am very fond of writing tech articles and creating new products.
OPCFW_CODE
Why can I not restart/shutdown? When I shut down or restart I get a black (shell-like) full screen with a long message saying things like: ubuntu 10.10 [129.171175] Restarting system. ... [OK] ... Unmounting local filesystems ... [OK] will now restart Then absolutely nothing happens and I have to physically hit the reset switch. In addition to what Delan suggested, in general you should definitely try different values for the reboot= boot parameter; I would suggest reboot=b in particular, because that is the most common one for machines to require. Here is the comment from linux/arch/x86/kernel/reboot.c with the possible values: /* reboot=b[ios] | s[mp] | t[riple] | k[bd] | e[fi] [, [w]arm | [c]old] | p[ci] warm Don't set the cold reboot flag cold Set the cold reboot flag bios Reboot by jumping through the BIOS (only for X86_32) smp Reboot by executing reset on BSP or other CPU (only for X86_32) triple Force a triple fault (init) kbd Use the keyboard controller. cold reset (default) acpi Use the RESET_REG in the FADT efi Use efi reset_system runtime service pci Use the so-called "PCI reset register", CF9 force Avoid anything that could hang. */ The kernel has a number of so-called "quirks" for particular machines that require the BIOS reboot method, but like any hardware quirks database the chances are that it is missing a few. Your computer may be one of the ones that is missing. If you find that reboot=b consistently fixes this for you, then please run 'ubuntu-bug linux' to report a kernel bug asking for this to be made the default for your hardware.
You can make this change either on the GRUB command line (hit 'e' on the relevant boot entry and go to the end of the linux line), or, to make it permanent, edit /etc/default/grub and change the GRUB_CMDLINE_LINUX line, taking care to put reboot=b (or whatever) inside the quote marks. Sometimes the restart does not quite work properly. For example, when using Ubuntu on Apple computers, you have to add reboot=pci to your boot flags to reboot properly, without hanging on the reboot message like your computer is. I'm not saying that your computer is an Apple, but that boot flag might help.
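As a concrete sketch of the permanent route, the edit amounts to the commands below. They are run against a sample copy here; on a real system you would apply the sed line to /etc/default/grub itself with sudo and then run sudo update-grub to regenerate the boot configuration.

```shell
# Create a sample copy standing in for /etc/default/grub.
printf 'GRUB_DEFAULT=0\nGRUB_CMDLINE_LINUX=""\n' > grub.sample

# Insert reboot=b inside the existing quotes of GRUB_CMDLINE_LINUX
# (add a separating space if the quotes already contain other parameters).
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1reboot=b"/' grub.sample

grep '^GRUB_CMDLINE_LINUX' grub.sample   # prints: GRUB_CMDLINE_LINUX="reboot=b"
```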
OPCFW_CODE
Hyper-V Program Manager Windows Virtual PC only officially supports Windows XP, Windows Vista and Windows 7 as guest operating systems. Thankfully it has great compatibility and can run many operating systems that are not officially supported. I recently needed to setup a Windows 98 virtual machine for my wife – who has some genealogy software that will not even run on Windows XP. To do this I created a new virtual machine and configured it with 64mb of RAM and a 16GB virtual hard disk. I was then able to install Windows 98 with no real problems: Some things to be aware of when doing this: After installation both networking and sound work correctly – but the video is kind of “sucky” and you need to capture / release the mouse whenever you use the virtual machine. Luckily you can address both of these issues by installing older virtual machine additions in the virtual machine. Doing this will give you: But you will not get: But how do you do this? The trick is to extract the old virtual machine additions out of a previous product. In my case I decided to get the virtual machine additions out of Virtual Server 2005 R2. To do this what you will need to do is: At this stage you should start up your Windows 98 virtual machine and login. Then attach the VMAdditions.iso file to the virtual machine. The virtual machine additions installer should start automatically inside the virtual machine: After this you will need to reboot the virtual machine. With all this in place – some parting notes that I have are: You should be able to use a loopback adapter to create a private network between your physical computer and your virtual machine. You can read about how to do this here: blogs.msdn.com/.../477195.aspx First off I wanted to thank you for this info in advance ok I did exactly what you said...and attached the .iso to the virtual machine. And no dice...it did NOT automatically load. I had to manually choose an exe file that loaded it...then it asked to reboot. 
I did and voila... nothing. I still have no integration components and can't access it at all. Any ideas? hey i am trying to do this because i want to play an old game i can't on xp so i wanna keep all my stuff on C but this thing is formatting it.. and i don't know how to install it without it formatting my C drive thanks in advance Hi Ben. Great article. One question: should i be selecting a fixed VHD rather than a dynamic one? I only ask because after what appears to be a successful Win98 install, the VM reboots and then hangs on the Win98 splash screen. A further reboot and i get a message along the lines of no boot device/OS found. I've tried with both 64Mb and 128Mb of RAM. Thanks. Yeah, but i was using Windows 98 when i was age 7. Hi, i followed the steps you listed above to increase the video card memory, but when i used the code in the prompt window, the files that were in my work directory seemed to not be affected by the commands written in the prompt even though there were no error messages after typing enter. So i was not able to extract the content of the setup.exe file. I tried to use winrar to extract the files but i received a "damaged archive" error message even though the setup file is not damaged; in fact if I double click on it it starts without problems. Do you have any suggestions? Thanks in advance. Where can I find it for windows 98 virtual pc?? How can i uninstall 98? do i have to uninstall virtual pc first? cannot see the iso file after doing the command prompt? 1 - Thank you for this detailed article. 2 - Virtual Additions from Virtual PC 2007 SP1 220.127.116.11 install and work just fine. Now i just need to work out how to get networking up and running... Just thought I'd contribute a little- I also found this tutorial helpful, but an easier method is: download a program such as virtual box/virtual pc or whatever, and choose to run an existing VMC. At this point you will be given the option to browse your computer and select a .vmc file for loading.
You can find a premade VMC with windows 98 installed readily available for downloading on the internet- I am including one example link, though it is not hard to find .vmc files on a file-sharing site. Hope this helps. How do I set up a virtual machine in Windows XP to run 98? I just installed the virtual machine but I don't know how I can install windows 7 that was already there and the programs I need. I am running a Win98 virtual machine so I can run old versions of CorelDRAW and FamilyTreeMaker that I don't want on my physical (Win7) machine. I can get files to the virtual machine by creating and mounting ISO files, but how do I get files from the virtual machine back to my physical machine? In the past (with Virtual PC 2007), I could just drag-and-drop files between the virtual and physical computers, but this no longer works with WinVPC. cnelson> "how do I get files from the virtual machine back to my physical machine?" Benjamin suggested a way with a private network (to Stuart). There is another way, and that involves using a Win XP VM as an intermediary, since it can "see" the physical machine's hard drive(s). It can also see other virtual hard drives if you add them. 1. With the XP VM *shut down*, go to the XP .vmcx file in C:\Users\meTheUser\Virtual Machines (where "meTheUser" is replaced by the user login you are using on Win 7.) 2. Right click "Settings" on that .vmcx file 3. Click down to "Hard Disk 2", supposing that you have not already added that. ("Hard disk 3" is there too, if you need it.) 4. Now, in the right-hand panel, click the button for "Virtual Hard Disk" and point to your Win 98 .vhd file. Mine, for example, is: C:\Users\gwhite\AppData\Local\Microsoft\Windows Virtual PC\Virtual Machines\w98se.vhd Your Win XP VM can now "see" your Win 98 VHD. 5. Make sure your Win 98 VM is shut down. 6. Start up your Win XP VM. Once it is started, you should be able to find the Win 98 hard drive.
Since the XP VM can see both the Win 98 VHD and your physical Win 7 hard drives, you can copy files directly between them. Note: You cannot have both VMs running at the same time to see the same VHD. There is no true sharing with this method. But it is straightforward, I suppose.
OPCFW_CODE