A project is visible only to its creator, to the users with whom it was shared, and to the administrator of the organization. A folder is visible to the administrator and all users.

[Permissions matrices omitted: two role/permission tables list which roles may download, save and open a project; upload a project; move a project; publish a project in eVIEW; open a project in eVIEW; generate a project for an older platform version; share a project with an internal or external user; share a project for a specified period; view all projects within the organization; view project details and the project detail "Shared with"; and download or view project documents and images.]

Supported file formats
Projects which are managed in eMANAGE must be stored in the backup file format *.zw1.

Maximum project size
Projects with a maximum size of 2 GB can currently be uploaded.

Download, save and open a project
Projects which you open from eMANAGE in the platform are first downloaded and saved locally on your computer, and only then opened in the platform. This means each modification you perform on the project is local and can therefore be performed offline. You can also download projects in the browser and open them later in the platform, with or without an Internet connection.
As soon as you have finished your modifications, you can simply upload the project again, and the updated project becomes available to the users with whom you have shared it. How to download, save and open projects is described here.

Upload a project
If you upload a modified project to the same folder from which you downloaded it, the file in eMANAGE is completely overwritten during the upload. Make sure beforehand that no other user with whom you share the project is currently working on it: if another user is working on the project and uploads it after you, your file is completely overwritten. Therefore check the modification date and the name in the details, and coordinate with the other users of the project. If you have published the project in eVIEW, you have to publish the updated project in eVIEW again. See Publish a project to eVIEW and open it. How to upload projects is described here.

Share a project
With the eMANAGE role "Designer" you can share your project with other users, giving them the possibility to download, edit and re-upload it. Further information on the role "Designer" is available here. The main administrator of an organization, or users with the eMANAGE role "Admin", can invite a new user to the organization while sharing a project or master data, and assign them permission to that project or master data in eMANAGE. The user receives an e-mail invitation both for the organization and for eMANAGE. Users without the eMANAGE role "Admin" can only share projects or master data with other users within the organization. Further information on the role "Admin" can be found here. Projects, master data or files shared with you can in turn be shared with another user if you have the "Edit" permission. If your period of use for projects or master data is limited, the user you invite can also access them only during this period. You can also specify the period for the invited user; however, it has to lie within your own period of use. How to share projects is described here.

Move a project
You can move a project into another folder using drag & drop. However, no project with the same name may already exist in the target folder.
When reading through a Salesforce Trailhead unit I came across a line I am unable to digest. The line says: "You can't set profile object permissions for a detail record." What does this mean? In my free developer org, I went to Setup -> Profile -> any one profile -> Custom Object Permissions. Is that where I should be looking to understand this?

This refers to objects which are related by a master-detail relationship. In such relationships we have two objects: the parent (usually referred to as the master) and the child (referred to as the detail). When a master-detail relationship is created, all the security settings applied to the parent (master) object are inherited by the child (detail) object. For example, if a user has a certain set of permissions on the parent object, they will have the same set of permissions on the child object. You cannot specify the permissions for the child object independently. I hope this clarifies your question.

I think I figured out what this statement meant: the permissions for being able to create, edit or delete a child record depend on the permissions set on the master object. If you have "Read" permission on the master object, you can go ahead and set Read, Create, Edit and Delete permissions on the child object. If you don't grant "Read" permission on the master object, then you won't be able to grant any permissions on the child object. I tried the following in my dev org.

Scenario 1: I had Read permission on the master object and Read, Create, Edit and Delete on the child object. When I removed Read permission on the master object, all permissions on the child object were automatically removed. I then tried granting Read and Create permissions on the child object with no permissions on the master object. I was expecting Salesforce to throw an error, but surprisingly it allowed me to do that; what it also did, however, was automatically enable Read permission on the master object. From the above, I can say that what Salesforce states is correct: you need to grant at least Read permission on the master object to be able to set permissions on the child object (which indirectly means "You can't set profile object permissions for a detail record").

I'm wondering if they mean that you can't set the profile object permission for the master field on the detail record. I am looking at a profile's settings for a custom object that is the detail (Line Items) to the master record (Invoice). I cannot modify the access to the field that relates to the master Invoice, as seen in this image; the field is grayed out and checked, since it is required by default, as noted in the Trailhead unit:

This creates a special type of relationship between two objects (the child, or detail) and another object (the parent, or master). This type of relationship closely links objects together such that the master record controls certain behaviors of the detail and subdetail record. In a master-detail relationship, the ownership and sharing of detail records are determined by the master record, and when you delete the master record, all of its detail records are automatically deleted along with it. Master-detail relationship fields are always required on detail records.

It might also be an error; you should submit your question as feedback on Trailhead. Added 2 screenshots, one with and one without the enhanced profile editor (click Setup -> Profiles -> one of your custom profiles -> Object Settings, or scroll down to Custom Object Permissions; also note that I have enabled the Enhanced Profile User Interface under Setup -> User Interface).
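The behavior observed in Scenario 1 can be sketched as a tiny model (the class and method names below are mine, invented for illustration; this only mirrors the UI behavior described above, it is not a Salesforce API):

```python
# Hypothetical model of the observed dependency between master and
# detail object permissions in a profile.

class MasterDetailPerms:
    def __init__(self):
        self.master = set()
        self.detail = set()

    def grant_detail(self, perm):
        """Granting any detail permission auto-enables Read on the master,
        as observed when saving the profile."""
        self.detail.add(perm)
        self.master.add("read")

    def revoke_master_read(self):
        """Removing Read on the master wipes all detail permissions,
        as observed in Scenario 1."""
        self.master.discard("read")
        self.detail.clear()
```

Running `grant_detail("create")` on a fresh instance leaves `"read"` in the master set, matching the surprise in the experiment above.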
[Feature]: Improving the social media pulse section on home page Is your feature request related to a problem? Please describe. Describe the solution you'd like The social media pulse section on the home page currently includes only YouTube videos. We require our latest tweets, Instagram posts and LinkedIn posts to also show up in this section. Please describe your potential solution along with your assignment request. Any creative ideas for this particular section are most welcome. Describe alternatives you've considered No response Additional context No response Hi @pradeeptosarkar , I would like to contribute to this issue. I've checked the code of the SocialMedia component and the following is my conclusion: YouTube video embed links are hard-coded into the app. In a similar way, LinkedIn posts, tweets and Instagram posts can be embedded using their embed URLs. However, it can be simplified. We can store each post/video ID in a list dedicated to each social media embed URL, store the static part of the URL in the codebase, and join the IDs with their respective social media URL at render time. For the above to happen, either you provide me with the LinkedIn, Instagram and Twitter post embed code list and I can start integration, or I can integrate the barebones of this concept first. Lastly, if you want a more advanced and more automated approach, an automated webdriver can be used to scrape the required embed URLs from the account so you don't have to do that manually, though it can violate community rules for social media. P.S.: The last one is just an idea and may not be required. You could also create a dedicated CMS for this, update it with embed URLs any time a post is made, and it will be reflected on the dynamic site. Headless CMS solutions like Strapi best solve these kinds of problems. Great @theDevSoham you can work on this one mate till point no. 3. Let me know what you would need. Cheers Thanks @pradeeptosarkar .
I'll ask for the embed URLs once I'm finished with this implementation. Great @pradeeptosarkar I'm done with the implementation, can I please get the LinkedIn, Instagram and Twitter post embed URLs? I'll integrate them into the website now. Hey @pradeeptosarkar are you there? Yeah @theDevSoham putting the links here shortly 🚀 11 New Chapters, One Vision! We're excited to announce nameSpace has expanded to 11 prestigious institutes! 🌟 Want to start a nameSpace chapter in your college? Apply now: https://t.co/NNvdZpdX6q #nameSpace #TechLeadership #Innovation pic.twitter.com/UP63bs125w — nameSpace Community (@namespacecomm) September 28, 2024 🔥✨ THE GRAND COMEBACK: TechXcelerate 3.0 IS BACK! ✨🔥 After two blazing editions, we're thrilled to announce TechXcelerate 3.0! 🚀 Join us from October 14th to November 20th, 2024, for a transformative journey that will elevate your tech skills! pic.twitter.com/YGWLdTF5b9 — nameSpace Community (@namespacecomm) September 21, 2024 View this post on Instagram A post shared by The nameSpace Community (@namespacecomm) View this post on Instagram A post shared by The nameSpace Community (@namespacecomm) @pradeeptosarkar Thank you, I'll revert back shortly with screenshots great thanks @pradeeptosarkar I'm attaching a screen recording, please have a look https://github.com/user-attachments/assets/1b7753b9-9208-4bdb-b3a5-bff66cc3cd57 great @theDevSoham thanks for contributing. closing this issue as resolved @pradeeptosarkar Thanks a lot. Happy Hacktoberfest! Cheers✨✨.
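The ID-plus-static-prefix idea discussed above can be sketched roughly as follows (in Python for brevity; the URL prefixes and IDs below are placeholder assumptions, not the project's actual embed endpoints or posts):

```python
# Sketch of the approach: keep only post/video IDs per platform plus
# the static part of each embed URL, and join them at render time.
# Prefixes and IDs here are illustrative, not verified endpoints.

EMBED_PREFIXES = {
    "youtube": "https://www.youtube.com/embed/",
    "instagram": "https://www.instagram.com/p/",
    "twitter": "https://twitter.com/namespacecomm/status/",
}

POST_IDS = {
    "youtube": ["abc123"],                 # hypothetical video ID
    "twitter": ["1840000000000000000"],    # hypothetical tweet ID
}

def build_embed_urls(prefixes, ids):
    """Join each platform's ID list with its static URL prefix."""
    return {
        platform: [prefixes[platform] + post_id for post_id in post_list]
        for platform, post_list in ids.items()
    }
```

Adding a new post then means appending one ID to a list instead of hard-coding another full embed URL.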
Senior Software Engineer (Dotnet)
Proficient working knowledge of SQL, PL/SQL and Cosmos DB. Strong understanding of configuring and troubleshooting Active Directory.
- Experience with object-oriented technologies and Microsoft web technologies
- Expertise in Azure infrastructure management (Azure Web Role, Worker Role, SQL Azure, Azure Storage, Azure Service Bus)
- Involved in developing Azure solutions and services such as PaaS and IaaS
- Solid understanding of front-end technologies such as Angular, React.js and Vue.js
- Created deployment packages for applications using the Visual Studio .NET Setup Project, which involves creating a native image of an assembly and installing an assembly in the Global Assembly Cache (GAC)
- Worked extensively with DataAdapter, DataSet and DataReader as part of ADO.NET to access and update databases
- Extensive experience in SQL Server database design and maintenance, developing T-SQL queries, stored procedures and triggers using SQL Server 2000, MySQL and Cosmos DB
- Good working knowledge of designing Use Case, Class, Sequence, Collaboration, State, Component, Deployment and Activity diagrams using UML
- Extensive experience designing, building, testing and maintaining software applications that are intuitive, free of errors and provide optimum performance

Bachelor of Technology in Information Technology, Punjab Technical University, Jalandhar, Punjab

Skills & Abilities
- Microsoft Technologies: .NET technologies and frameworks: Dotnet Core, ASP.NET, ADO.NET, LINQ, WCF, WPF, ASP.NET MVC, Entity Framework and .NET Framework (4.5 / 4.0 / 3.5 / 3.0 / 2.0 / 1.1 / 1.0)
- Programming Languages: C, C++, C#, VB 6.0, T-SQL
- Front-end Tools: Angular
- Scripting Languages: JavaScript, VBScript
- RDBMS: SQL Server 2000, MySQL, Cosmos DB
- Operating Systems: Windows, UNIX, Linux
- Development Tools: Visual Studio .NET 2003, Visual Web Developer 2005
- Server-side Management: Azure, AWS, Docker, Apache
- Version Control: Git (GitHub, GitLab, Bitbucket)

Management & Leadership skills
- Capable of setting a good direction for the team
- Able to act on feedback positively
- Good experience mentoring junior developers
- Skilled in task delegation
- Develops autonomous leadership

- Scout - Scout is an iOS application for users who are interested in photography and exploring places. Users can add photographs of places together with tips and tricks, and can also view the posts of other users. They can add tags according to the location. Once the user has logged into the account, the home/feed screen contains a mixture of locations posted by people they follow and locations near the user. To view the location of another user's post, the user has to unlock the post by buying one of the bundle packages. Once unlocked, the user can see the location of the post and can also save it.
Technology Stack: Dotnet Core, MongoDB, LINQ. Role: I worked as a team member developing APIs using Dotnet Core for different operations and activities in the project.
- https://beta.growmotely.com/ - Growmotely is an onl
/*!
 * Copyright by Contributors
 * \file mpi.h
 * \brief stubs to be compatible with MPI
 *
 * \author Ankun Zheng
 */
#pragma once
namespace rdc {
namespace mpi {
/*! \brief enum of all operators */
enum OpType {
  kMax = 0,
  kMin = 1,
  kSum = 2,
  kBitwiseOR = 3
};
/*! \brief enum of supported data types */
enum DataType {
  kChar = 0,
  kUChar = 1,
  kInt = 2,
  kUInt = 3,
  kLong = 4,
  kULong = 5,
  kFloat = 6,
  kDouble = 7,
  kLongLong = 8,
  kULongLong = 9
};
// MPI data type to be compatible with existing MPI interface
class Datatype {
 public:
  size_t type_size;
  explicit Datatype(size_t type_size) : type_size(type_size) {}
};
// template function to translate type to enum indicator
template<typename DType>
inline DataType GetType(void);
template<>
inline DataType GetType<char>(void) {
  return kChar;
}
template<>
inline DataType GetType<unsigned char>(void) {
  return kUChar;
}
template<>
inline DataType GetType<int>(void) {
  return kInt;
}
template<>
inline DataType GetType<unsigned int>(void) {  // NOLINT(*)
  return kUInt;
}
template<>
inline DataType GetType<long>(void) {  // NOLINT(*)
  return kLong;
}
template<>
inline DataType GetType<unsigned long>(void) {  // NOLINT(*)
  return kULong;
}
template<>
inline DataType GetType<float>(void) {
  return kFloat;
}
template<>
inline DataType GetType<double>(void) {
  return kDouble;
}
template<>
inline DataType GetType<long long>(void) {  // NOLINT(*)
  return kLongLong;
}
template<>
inline DataType GetType<unsigned long long>(void) {  // NOLINT(*)
  return kULongLong;
}
}  // namespace mpi
namespace op {
struct Max {
  static const mpi::OpType kType = mpi::kMax;
  template<typename DType>
  inline static void Reduce(DType &dst, const DType &src) {  // NOLINT(*)
    if (dst < src) dst = src;
  }
};
struct Min {
  static const mpi::OpType kType = mpi::kMin;
  template<typename DType>
  inline static void Reduce(DType &dst, const DType &src) {  // NOLINT(*)
    if (dst > src) dst = src;
  }
};
struct Sum {
  static const mpi::OpType kType = mpi::kSum;
  template<typename DType>
  inline static void Reduce(DType &dst, const DType &src) {  // NOLINT(*)
    dst += src;
  }
};
struct BitOR {
  static const mpi::OpType kType = mpi::kBitwiseOR;
  template<typename DType>
  inline static void Reduce(DType &dst, const DType &src) {  // NOLINT(*)
    dst |= src;
  }
};
template<typename OP, typename DType>
inline void Reducer(const void *src_, void *dst_, uint64_t len) {
  const DType *src = (const DType*)src_;
  DType *dst = (DType*)dst_;  // NOLINT(*)
  for (uint64_t i = 0U; i < len; ++i) {
    OP::Reduce(dst[i], src[i]);
  }
}
}  // namespace op
}  // namespace rdc
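The semantics of the op::Reducer template above — dst[i] = op(dst[i], src[i]) for every element — can be sketched in Python for readers skimming the template code (names here are mine; the actual implementation is the C++ above):

```python
# Elementwise reductions mirroring rdc::op::{Max, Min, Sum, BitOR}:
# each operation combines the source buffer into the destination buffer
# one element at a time, exactly as op::Reducer does.

OPS = {
    "max": max,
    "min": min,
    "sum": lambda a, b: a + b,
    "bit_or": lambda a, b: a | b,
}

def reducer(op_name, src, dst):
    """Reduce src into dst in place: dst[i] = op(dst[i], src[i])."""
    op = OPS[op_name]
    for i in range(len(dst)):
        dst[i] = op(dst[i], src[i])
    return dst
```

This is the per-element contract an allreduce implementation relies on when it picks a reducer by OpType.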
#if NET40Plus
using System.Collections.Specialized;
using System.Web;

namespace Navigation
{
    /// <summary>
    /// Implementation of <see cref="Navigation.IStateHandler"/> that builds and parses
    /// navigation links for an MVC <see cref="Navigation.State"/>
    /// </summary>
    public class MvcStateHandler : StateHandler
    {
        /// <summary>
        /// Gets the data parsed from the Route and QueryString of the <paramref name="context"/>
        /// with the controller and action Route defaults removed
        /// </summary>
        /// <param name="state">The <see cref="State"/> navigated to</param>
        /// <param name="context">The current context</param>
        /// <returns>The navigation data</returns>
        public override NameValueCollection GetNavigationData(State state, HttpContextBase context)
        {
            NameValueCollection data = base.GetNavigationData(state, context);
            data.Remove("controller");
            data.Remove("action");
            if (context.Request.QueryString["refreshajax"] != null)
            {
                data.Remove("refreshajax");
                data.Remove("includecurrent");
                data.Remove("currentkeys");
                data.Remove("tokeys");
                data.Remove("navigation");
            }
            return data;
        }

        /// <summary>
        /// Returns the route name of the <paramref name="state"/>
        /// </summary>
        /// <param name="state">The <see cref="Navigation.State"/> to navigate to</param>
        /// <param name="context">The current context</param>
        /// <returns>The route name</returns>
        protected override string GetRouteName(State state, HttpContextBase context)
        {
            return "Mvc" + state.Id;
        }
    }
}
#endif
Why does a VB.NET DLL add-in for Inventor compiled on an Intel machine work on the Intel machine but not on AMD?

I'm continually developing an Inventor add-in in VB.NET in Visual Studio 2019. I have multiple machines with different builds, but once in a while some machine just doesn't want to load the add-in. For example, the current version I have now works on all machines except one AMD machine. When I compile the same project, with the same settings and no changes at all, on the AMD machine with the Any CPU build option, it runs without problems. When I do it on my primary development machine, it does not work on this other computer. I checked dependencies with Dependency Walker and I do not get any error messages. When I set breakpoints in Debug mode in the first methods called in the "StandardAddInServer.vb" file, the debugger never reaches them on the AMD machine when the DLL was compiled on the Intel machine. In reverse, it runs smoothly. I have no idea what this could be, and I'm only speculating that it has to do with the AMD/Intel difference between the machines. Any help towards a solution would be appreciated.

Inventor 2018.3.7 Professional Build 287 is on the Intel i7-4771 machine, with Visual Studio Community 2019 16.3.9 and .NET 4.8.03761. Inventor 2018 Professional Build 112 is on the AMD Ryzen 7 3700X machine, with Visual Studio Community 2019 16.7.2 and .NET 4.8.03752. Any more information which could be helpful will be provided gladly.

Well... I thought, let's make the machines identical in regards to software. I started installing Inventor updates one by one, and at version 2018.3.1 the add-in magically was working. So I hope this helps somebody: if the add-in automatically unloads without any error, it's probably Autodesk Inventor's fault.

You could try an earlier framework version like 4.6.2. That should fix the problem. It could be that only the newer service packs of Inventor can deal with add-ins created with newer .NET versions. When Inventor 2018 was released, .NET 4.8 was not available.

What Albert wrote is true as well. If the architecture of your DLL doesn't match that of your host process, it can't be loaded. A 64-bit process can use 64-bit DLLs only, whereas a 32-bit one can deal with 32-bit DLLs only. A .NET project isn't compiled by Visual Studio into fully functional machine code (nowadays you can do this as well), but into intermediate code only, which is finally compiled by the .NET framework on the target machine. If Any CPU is selected, you don't have to provide different versions for different hardware because of this behaviour.

Are you using Any CPU, or are you forcing the project to a given bit size? If you don't set this and leave it at Any CPU: well, if you launch the application from Visual Studio (such as when installing on that target machine), then the application will run as 32-bit. However, if you launch the program from the Windows command line: if you use the 64-bit command prompt, you get a 64-bit in-process program. If any of your external DLLs or libraries are 32-bit (or not compiled as Any CPU), or you are using any unmanaged code libraries (say, something like Ghostscript), then your program will still run (or try to run) as 64-bit. However, if you launch a 32-bit Windows command prompt (there are two of them, one 32-bit and one 64-bit) and start the .NET EXE (your program) from it, then your program will run in-process as 32-bit. So be careful here. Using Any CPU from Visual Studio will ALWAYS get you a 32-bit program, including while debugging. But RUNNING the program (launching outside of VS) will not always be 32-bit. The above behaviour seems to explain your issues on the AMD machine: it was NOT that you installed VS on that machine, but that using VS to launch your program in fact FORCED it to run as 32-bit.
Bottom line: do NOT use Any CPU unless that is exactly what you need, and you are VERY sure any external libraries are also compiled as Any CPU, or in fact that any external libraries do NOT use any unmanaged code. All in all? I would configure your project to always force x86, and thus you won't get surprises of code not working. I doubt very much that the AMD CPU is the issue; installing VS only appeared to work because it forced your project to run as 32-bit as opposed to 64-bit. It has zero to do with AMD here.
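If you suspect a bitness mismatch, one quick diagnostic is to read the Machine field from the DLL's PE header; a minimal sketch of that check (standard PE/COFF layout; note that Any CPU .NET assemblies report x86 here and rely on CLR header flags instead, so this only rules out natively targeted DLLs):

```python
import struct

# Read the Machine field from a PE (EXE/DLL) header:
# 0x014C = x86 (32-bit), 0x8664 = x64. A 64-bit host process can only
# load 64-bit native DLLs, which is the mismatch discussed above.

MACHINE_NAMES = {0x014C: "x86", 0x8664: "x64"}

def pe_machine(data: bytes) -> str:
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    # the offset of the "PE\0\0" signature is stored at 0x3C
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # the COFF Machine field directly follows the signature
    machine = struct.unpack_from("<H", data, pe_offset + 4)[0]
    return MACHINE_NAMES.get(machine, hex(machine))
```

Running it over the add-in DLL and the host executable (e.g. `pe_machine(open("MyAddin.dll", "rb").read())` — a hypothetical path) shows at a glance whether the architectures can match.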
AWJ wrote: According to nocash, during Mode 7 rendering, the values read from $2134-$2136 constantly change based on the Mode 7 calculations. This doesn't seem to be emulated by bsnes at all. Has anyone investigated exactly how this works (particularly timing-wise)?

No. I know that it's used for the Mode 7 multiplication calculations, but I have not emulated it. I don't even believe I'm caching the Mode 7 multiplications themselves yet (it should only be adding things per pixel.) The PPU multiplication updates are like one piece of a large jigsaw puzzle. Trying to tackle it alone would just make a mess of things. If we were to form proper cycle stepping of all PPU operations, $2134-$2136 would naturally fall seamlessly into place. I desperately need jwdonal's test ROMs and PPU timing findings to improve bsnes further. But I have three other emulation cores to keep me busy in the meantime, so I guess it's no great rush.

jwdonal wrote: You'd want to show that at least one game depends on reading a very specific byte value in order to justify all that coding/testing effort. Given that bsnes doesn't implement the behavior, and every SNES game already runs on bsnes, I think you might have a hard time finding that justification.

I emulate all kinds of things that no games rely on. It's not so much about making regular games run better; it's about ensuring someone in the future doesn't decide to use those registers as "free, fast multiplication" while also using Mode 7, and then end up surprised when their game breaks on real hardware. I've been on the other end of that (with different limitations), and it's not fun. I don't think most emulators should bother, especially not Snes9x/bsnes-performance/bsnes-balanced. But we should ideally have one emulator that a person uses as a final test before releasing their games, and that should emulate anything that is humanly possible.
koitsu wrote: I also now sympathise with byuu having literally 3 separate CPU/PPU cores to maintain. What a fucking nightmare.

After probably 4+ years of maintaining them all (and building profile-optimized binaries for all three, in both 32-bit and 64-bit configurations), I finally threw in the towel and discontinued the performance/balanced cores. I'm very appreciative that someone out there understands why I had to do that. It was a very difficult decision for me, and one that has obliterated most of the small userbase I still had left.

By the way, if you guys want an even bigger rabbit hole to chase: the regular CPU mul/div stuff isn't 100% emulated either. As most of you know, it's a multi-cycle process that updates the math computations one bit at a time. And thanks to blargg, we know the algorithms and have that emulated. But it is possible to start a division during a multiplication, and even easier to start a multiplication during a division. The result is that both run simultaneously, only they share some transistors along the way. The resultant computations are a complete mess, and even blargg was unable to make sense of them. This is currently unemulated, and probably the easiest way to quickly detect bsnes versus real hardware today.
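For readers unfamiliar with bit-at-a-time hardware multiplication, a generic shift-and-add multiplier sketches the idea (this is only the general technique of the result updating one bit per cycle, not the exact SNES circuit blargg documented):

```python
def bit_serial_multiply(a: int, b: int, bits: int = 8):
    """Shift-and-add multiply, one bit of b per 'cycle'.

    Returns the final product plus the intermediate result after each
    cycle; reading the hardware result registers mid-computation would
    observe in-progress states analogous to these.
    """
    result = 0
    states = []
    for cycle in range(bits):
        if (b >> cycle) & 1:
            result += a << cycle  # add a shifted copy for each set bit
        states.append(result)
    return result, states
```

Starting a second operation on shared hardware mid-way, as described above, would perturb exactly these intermediate states, which is why the combined mul/div results are such a mess.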
Can Google predict stock market performance? Can Google predict the stock market? The short answer is "No". The slightly longer and more interesting answer is "No... not yet." Researchers at the University of Warwick Business School, along with academics from Boston in America and UCL in London, are working on using data from Google to predict how the stock market will move. This research, now published in Nature, grew out of earlier work from 2010 which discovered a link between the number of people Googling specific companies like Apple or Microsoft and the behaviour of the company's share price. From that, the researchers wondered whether you could predict the behaviour of the stock market itself using words other than company names. The team looked at the behaviour of 98 words people searched for on Google and then used them to programme a simple financial strategy. I should say that all of this was done using data from 2004 to 2011 rather than trying to predict the stock market "live". Some of the words were obviously financial, like "debt", "stocks", "portfolio" and "inflation", while others were not, such as "colour", "marriage" and "garden". The researchers discovered that some words were more likely than others to predict what the stock market would do. They then compared their words with language commonly used by the Financial Times and discovered that the more popular a word was in the FT, the more likely it was to predict the market. In other words, it's the more financial words that provide the best guide; indeed, a strategy based on Google searches for the word "debt" outperformed the stock market by around 300%. Of course, all of this is based on analysing data from the past. The big question is: can we do this with live information and make enormous amounts of money? Not surprisingly, that's what the researchers are looking at next.
The difficult bit is discovering the key search words that are becoming more or less important and which will then be useful as predictors. Doing that will take a lot more information about what people are interested in online. Google only releases a limited amount of information on what people are searching for, but there are plenty of other data-rich sources that might help, from Twitter to Wikipedia. Of course, everyone from banks to governments is very interested in this idea, and not just to predict the stock market but also things like the potential size of flu outbreaks or the chance of civil unrest and rioting. But ironically, if the team at Warwick do crack this and find a way to predict how the stock market, and by extension all of us, will behave, and they then alert us to the fact, there is a good chance we will modify our behaviour, and so the team will have to start again.
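The kind of strategy described above can be caricatured in a few lines (a deliberately simplified sketch with synthetic numbers; the published work used weekly search-volume data and index closes, with many details glossed over here):

```python
# Simplified sketch of a search-volume trading strategy: compare this
# week's search volume for a term (e.g. "debt") with its average over
# the previous `delta` weeks; if interest rose, short the index for a
# week, otherwise go long. Numbers fed in are synthetic, not real data.

def trends_strategy_returns(search_volume, index_price, delta=3):
    positions = []   # +1 = long, -1 = short, chosen each week
    cumulative = 1.0
    for t in range(delta, len(index_price) - 1):
        avg = sum(search_volume[t - delta:t]) / delta
        pos = -1 if search_volume[t] > avg else +1
        weekly_return = index_price[t + 1] / index_price[t] - 1
        cumulative *= 1 + pos * weekly_return
        positions.append(pos)
    return cumulative, positions
```

With synthetic data where a spike in searches precedes a price fall, the short position captures the drop, which is the effect the researchers report for the word "debt".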
Interface Fragment Types Not Getting Spread In Queries

Which packages are impacted by your issue?
@graphql-codegen/client-preset

Describe the bug
Codegen doesn't spread the fields correctly when:
- the fragment belongs to an interface, and
- the spread happens in a subtype of the interface.

Example: Employee and Customer are both subtypes of BasePerson.

fragment BasePersonFragment on BasePerson {
  id
  name
}

query PeopleQuery {
  people {
    ... on Employee {
      ...BasePersonFragment
    }
  }
}

The data will not include the id and name fields from the fragment:

console.log(data);

data belonging to PeopleQuery will log {people: [{__typename: "Employee"}, {__typename: "Customer"}]}.

A minimum reproducible example is provided here: https://github.com/abir-taheer/minimum-reproducible-graphql-codegen-interface-fail
In that repo, for convenience, codegen is run after the yarn command in the postinstall script. Running yarn dev from the monorepo root folder will run the web app and the GraphQL server to demonstrate the query mismatch.
The codegen config file is here: https://github.com/abir-taheer/minimum-reproducible-graphql-codegen-interface-fail/blob/main/apps/graphql/codegen.ts
typedefs: https://github.com/abir-taheer/minimum-reproducible-graphql-codegen-interface-fail/blob/main/apps/graphql/typeDefs.ts
The query is here: https://github.com/abir-taheer/minimum-reproducible-graphql-codegen-interface-fail/blob/main/apps/web/src/App.tsx

Your Example Website or App
https://github.com/abir-taheer/minimum-reproducible-graphql-codegen-interface-fail/tree/main

Steps to Reproduce the Bug or Issue
Most details are mentioned above; commands to run the MRE:
yarn
yarn dev

Expected behavior
The site should display: results: John Doe has id $1 Jane Doe has id $2
But it displays: results: has id $ has id $

Screenshots or Videos
No response

Platform
- OS: macOS
- NodeJS: 20.1.0
- graphql version: 16.8.1
- @graphql-codegen/cli: "^5.0.0"
- @graphql-codegen/client-preset: "^4.1.0"

Codegen Config File

import type { CodegenConfig } from "@graphql-codegen/cli";
import { makeExecutableSchema } from "@graphql-tools/schema";
import { ensureDirExists } from "@monorepo/utils";
import * as fs from "fs";
import { printSchema } from "graphql";
import * as path from "path";
import { typeDefs } from "./typeDefs";

const schema = printSchema(makeExecutableSchema({ typeDefs }));

ensureDirExists("generated");
fs.writeFileSync("generated/schema.graphql", schema);

const webDir = path.dirname(require.resolve("@monorepo/web/package.json"));
const webGeneratedDir = path.join(webDir, "src", "generated/");
ensureDirExists(webGeneratedDir);

const config: CodegenConfig = {
  overwrite: true,
  schema,
  generates: {
    // Typescript types for GraphQL backend server
    "generated/ts-schema.ts": {
      plugins: ["typescript", "typescript-resolvers"],
      config: {
        contextType: "../context#GraphQLContext",
      },
    },
    // Typescript types for GraphQL frontend client
    [webGeneratedDir]: {
      documents: [webDir + "/src/**/*.ts", webDir + "/src/**/*.tsx"],
      preset: "client",
      config: {
        useImplementingTypes: true,
      },
      plugins: [],
    },
  },
};

export default config;

Additional context
No response

Deeper investigation has revealed that this behavior still occurs even without typegen, and I've discovered that this is actually an issue with the Apollo cache. For anyone else who runs into this, see this issue: https://github.com/apollographql/apollo-client/issues/7648
Essentially, you have to add the possibleTypes property to the client cache. Changing the cache to the following fixed the issue:

const cache = new InMemoryCache({
  possibleTypes: {
    BasePerson: ["Employee", "Customer"],
  },
});
GITHUB_ARCHIVE
There's a lot of outdated information and confusion for system administrators out there. One annoying task for many an Administrator has been backing up data in Linux. You don't need any GUI tools such as K3B or GnomeBaker. Both are excellent tools, but for veteran command line users working remotely, using the keyboard is a great, and easily automated, way to save yourself pain and hassle. At a later date we'll cover how scripting can automatically back up certain files to disc, verify them, catalogue them and even e-mail a report of the whole thing nightly or weekly. It can even remind you the night before to make sure a blank disc has been inserted. You'll need dvd+rw-tools. On Debian variants including Ubuntu you would run: apt-get install dvd+rw-tools On Redhat Enterprise / CentOS you would run: yum install dvd+rw-tools In Unix it would probably be the same port/package name, especially on FreeBSD. No drivers or anything else are required at this point. There are two ways of doing it. The traditional way is creating an ISO image using mkisofs, but that is a pain, and not even a possibility on some overfilled disk systems. Many people believe there is no way to write on the fly, except to create an ISO with the data you'd like to burn.
Here's the traditional way, just for the sake of the oldschool: mkisofs -r -o /root/myisoname.iso /var/www/vhosts Now we use growisofs to actually burn the .iso: growisofs -Z /dev/dvd1=/root/myisoname.iso *Replace /dev/dvd1 with the device name of your burner Watch as it burns: 52887552/367337472 (14.4%) @3.9x, remaining 0:59 RBU 100.0% UBU 61.2% 71925760/367337472 (19.6%) @4.1x, remaining 0:53 RBU 100.0% UBU 55.1% 90112000/367337472 (24.5%) @3.9x, remaining 0:49 RBU 100.0% UBU 59.2% 108363776/367337472 (29.5%) @4.0x, remaining 0:47 RBU 99.6% UBU 53.1% 127303680/367337472 (34.7%) @4.1x, remaining 0:43 RBU 100.0% UBU 55.1% 145555456/367337472 (39.6%) @4.0x, remaining 0:39 RBU 100.0% UBU 51.0% 164265984/367337472 (44.7%) @4.1x, remaining 0:37 RBU 99.6% UBU 53.1% 182779904/367337472 (49.8%) @4.0x, remaining 0:33 RBU 100.0% UBU 55.1% 200900608/367337472 (54.7%) @3.9x, remaining 0:29 RBU 100.0% UBU 61.2% 219971584/367337472 (59.9%) @4.1x, remaining 0:26 RBU 97.3% UBU 53.1% 238157824/367337472 (64.8%) @3.9x, remaining 0:23 RBU 100.0% UBU 57.1% There's no reason to use mkisofs and create an ISO file; it wastes time and resources. You could have just done this: growisofs -M /dev/dvd1 -R -J /var/www/vhosts *-R and -J are for Rock Ridge and Joliet extensions, so you don't get annoying truncated 8-character filenames *Actually my preferred way is this: growisofs -M /dev/dvd1 -R -J -joliet-long -iso-level 3 /filenordir/name -joliet-long gives you much longer filenames -iso-level 3 helps too This is the perfect setup: in Linux, Windows, and a DVD player alike, the filenames are shown as expected.
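As a taste of the scripting mentioned earlier, here's a minimal hedged sketch of a nightly burn script. The paths, device name, and log file are assumptions to adapt for your box, and with DRY_RUN=1 (the default) it only prints the command it would run, so you can sanity-check it before pointing it at real media:

```shell
#!/bin/sh
# Hypothetical nightly DVD backup sketch - adjust BURNER, DIRS and the
# log path for your setup. Note: -M grows an existing session; use -Z
# instead for the first session on a blank disc.
BURNER=/dev/dvd1
DIRS="/var/www/vhosts /etc"
DRY_RUN=${DRY_RUN:-1}

CMD="growisofs -M $BURNER -R -J -joliet-long -iso-level 3 $DIRS"

if [ "$DRY_RUN" = "1" ]; then
    # Dry run: just show what would be executed.
    echo "$CMD"
else
    $CMD && echo "burn finished $(date)" >> /var/log/dvd-backup.log
fi
```

Dropped into cron, and with DRY_RUN=0, this is the skeleton that verification, cataloguing and the e-mailed report would hang off.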
OPCFW_CODE
scsi vs 74gb raptor raid What would you guys do? I'm feeling the need to try something new. I pieced together a cheap SCSI U160 setup with a 10K 73GB drive for about $140. Or I could go with another Raptor and do RAID. Looking for the best performance possible for under $150. Quote:The single Raptor would match the performance of the pair of 10K SCSI in RAID0. The Raptor is just so much more tuned toward single-user performance compared to any SCSI drive. My 2x15K RAID0 doesn't beat (if timed by stopwatch) a 2xRaptor setup in intensive daily/gaming usage. Same reason. Are you comparing this based on U160 or U320? PCI or PCI-X or -e? Quote:The first one, latest gen. U320 10K on PCI-X 133MHz or PCIe. Mine, 2nd gen. 15K U160, so it's even less tuned toward single-user and just barely keeps up to a Raptor. I'm using PCI-X 66MHz compared to PCIe. Not comparing STR here, I never compare STR when deciding on performance. Firmware tuning is truly magic! 90% of the time my server is just used as a desktop, except at LANs where it acts as a leech+game server and the 15Ks show their random access and multiple-command power. Not that I know of, but since we hijacked the thread already... I'm in the same boat, I have a server board that lacks DDR2 and the such, and am taking your advice that the Asus workstation pro (AM2) is a rip off. So I'm looking into trading off my SCSI system (PCI-X) to get over to PCIe cards. LSI is the only one I know of, and it will be some time before HP and the gang start putting out cheap dual channel cards. I have a pair of super fast 15K Seagates and 15K 147 Fujitsu drives, and I do have two SAS drives that I picked up at a swap meet of all places, 80 bucks each. Couldn't pass it up, but have no way of using them! Quote:and it will be some time before HP and the gang start putting out cheap dual channel cards. Yeh, good point. Come to think of it all the cheap SCSI RAID cards I've owned in the past came from retired OEM servers!
I just looked at the current 2nd hand price of those 18GB 15K and selling them just doesn't seem to be worth it compared to what I paid a year ago even though I bought them cheaper than others at that time. They're like my babies! The price for U160 SCSI RAID cards still looks good here so I'll have to sell them soon before they further degrade in value. So I think I'll just keep on searching for cheap 2nd hand PCIe ones, hoping to go under US$150 here. P.S. ignore the state I'm currently in, w00f
OPCFW_CODE
Expensive JDBC Operation Monitoring now available We are happy to announce that JDBC Monitoring is now publicly available for all Plumbr users. After five weeks of private beta we gathered enough feedback and evidence that the detection is ready for general availability. During those weeks we monitored billions of transactions and detected thousands of performance bottlenecks in hundreds of different JVMs deployed by 300+ companies. The insights gathered helped us fine-tune the solution, which as of today is available to all Plumbr users. The launched solution helps operations and engineering be on the same page by having all the necessary information at your fingertips – with Plumbr monitoring for expensive JDBC operations you no longer need to do impact analysis in one tool, find the offending query in the database monitoring tools, and then manually find the source in your Java code composing and executing the operation. In Plumbr, all this information is monitored and aggregated for you, as seen in the following screenshot, where Plumbr: - detected an expensive JDBC operation blocking a thread for close to 9 seconds; - confirmed that this was a recurring issue (in total, the very same operation stalled 127 times, for 23 minutes and 31 seconds); - outlined that the wait occurred during the very same SQL query being executed: Equipped with this information, you can quickly triage the issue based on the severity of the problem. The next step is finding the actual root cause, where we again equip you with the necessary information: From the above you can see that all 127 expensive JDBC operations were trying to execute the very same query against the very same datasource.
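Conceptually, the detection described above boils down to timing each operation and aggregating recurrences of the same SQL string. The toy sketch below illustrates that idea only; it is not Plumbr's API (Plumbr does this transparently via its agent), and the class and method names here are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy illustration of expensive-operation detection: time each query,
// and aggregate stalls per SQL string once they cross a threshold.
public class SlowQueryTracker {
    private final long thresholdMillis;
    private final Map<String, Long> totalStallMillis = new HashMap<>();
    private final Map<String, Integer> occurrences = new HashMap<>();

    public SlowQueryTracker(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    // Wraps a query execution, recording it when it runs too long.
    public <T> T run(String sql, Supplier<T> query) {
        long start = System.nanoTime();
        try {
            return query.get();
        } finally {
            long elapsed = (System.nanoTime() - start) / 1_000_000;
            if (elapsed >= thresholdMillis) {
                totalStallMillis.merge(sql, elapsed, Long::sum);
                occurrences.merge(sql, 1, Integer::sum);
            }
        }
    }

    public int occurrencesOf(String sql) {
        return occurrences.getOrDefault(sql, 0);
    }
}
```

Wrapping, say, a PreparedStatement execution in `run(...)` would yield exactly the kind of "same query stalled 127 times" aggregate shown in the screenshot, minus the stack-trace and datasource attribution that the agent adds.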
Also, you can immediately drill down to the single line in source code executing the query – in this particular case the culprit was a prepared statement executed by the eu.plumbr.portal.incident.history.JpaProblemHistoryDao.findAccountProblems() method on line 74. In addition to fine-tuning Plumbr's slow query incident reports, the beta program also revealed interesting facts about the impact that slow queries have on the SLA of business critical applications. Plumbr detected slow queries in 75% of the applications that participated in our private beta program. A third of these applications regularly stalled user transactions for 10 seconds or more because of inefficient queries. As of now, Plumbr JDBC Monitoring is officially supported for all major database vendors (with the notable exception being IBM DB2). We do not intend to stop here – as next steps we already see the need for: - Improving usability by adding the possibility to include the prepared statement parameters in the root cause. The possibility was removed from the original design due to privacy-related concerns, but we are working to make it possible to enable parameter logging via opt-in configuration for applications for which it would not pose a security concern. - Increasing the number of officially supported data sources. In addition to the currently supported vendors, we are already working to add support for IBM DB2 and SQLite databases. - Expanding the detection from single expensive operations to situations where multiple queries triggered during a single user transaction result in poor user experience. A typical example of such a situation would be an ORM-introduced N+1 problem. We expect to have results for the aforementioned list already during the next couple of months.
On the longer horizon is the ability to expand the detection to other types of databases as well – the world in 2015 no longer consists of just relational data stores accessed via JDBC, and we expect to support all major NoSQL vendors as well. Work in this regard is still in its infancy, so clear release dates cannot yet be scheduled.
OPCFW_CODE
It is common practice nowadays to check website code quality before executing any code on a live website. There are various reasons for this practice, including reducing risk, a better user experience, and a higher search engine ranking. This article focuses on some common questions that website owners ask when they first start their search for a good coding software solution. The questions usually focus on issues such as: what do I need to check for, and how do I check the code quality of my website and product? Developers have various standards for checking and reviewing code quality. Code review is a complicated process, and every company must follow a code review checklist before conducting one. Code Review to Check Website Code Quality It is becoming more and more usual for development teams to conduct code reviews. Code review templates provide the best way to review code quality. Before merging branches or releasing code to production, developers submit their code for review and feedback. Reviewers make comments on individual lines of code and, in the end, accept or reject suggested changes. Let’s see what a code review is: “It is a joint effort between the reviewer and the author. Aiming for perfection in all aspects, they want to create code that is simple to comprehend for the next developer. Not one developer is criticizing the other; rather, two developers are working together to create software of far higher quality than each could produce alone.” By David Bolton Things to Do Before the Code Review Process These templates help minimize the number of mistakes that go into production. There are a few things to bear in mind when performing a code audit. These processes should be followed by every team or CTO once the first code version is complete. Before beginning the code evaluation process, it is beneficial to create a design standard.
A better way is to create a code review template… Software performance goals, methods employed, technologies utilized, and the end output should be identified. A peer code review checklist is a useful tool for checking whether or not you’ve implemented code according to expectations, and if you haven’t, how much it deviates from them. What Is a Checklist? Checklists are used to keep track of important tasks. When developers review source code changes before they are incorporated into the codebase, they use checklists that include specified requirements. What Is the Purpose of a Checklist? Checklists help keep things organized and ensure you complete all stages. “Checklists are a kind of work aid intended to decrease failure rates by accounting for the limitations of human memory and attention. They help guarantee that a job is carried out consistently and completely. The “to-do” list is a good example.” A code review checklist may be used in a variety of circumstances. Your business may already use checklists as part of its code review process, or you may create one as part of your own efforts to enhance your code reviews. The code reviewer and code submitter alike may benefit from using a checklist. Preparing the code for review with a checklist enables the programmer to take a step back and examine their code more objectively before submitting it. A review checklist may be more beneficial for the programmer than the reviewer, in my opinion. Checklist for Code Review Gut feeling is the primary basis for most code critiques; no clear approach guides the review. Due to this informal approach, some aspects of code review are sometimes overlooked altogether. Here are the components of a checklist for code reviews: A code review’s most common objective is to improve the code’s quality and make it easier to maintain.
Code of high quality has little technical debt and requires the least effort while creating and maintaining future code versions. Four criteria should be maintained for improved code quality. - The code is easily readable - The code is easy to test - The code can be debugged - The code can be reconfigured conveniently Any developer should be able to read and understand the code easily. In any case, the code should be simple to test; use interfaces to communicate between levels. Several methods exist for ensuring maintainability: - Look for code that is simple to understand and read, as well as easy to maintain. - Code should be simple to understand and should be formatted properly. - Comment on key steps and instructions to enhance comprehension, while excluding comments that impede comprehension. - As a rule, use the proper language and align your code with appropriate spacing while writing it. An output test measures a code’s ability to generate the expected output. All test plans and unit cases should be available and executed. Every non-obvious decision must be addressed in tests. The process should not be rushed. Checking comments is one of the things that should be done during a code quality check. Coding comments should be carefully examined to ensure that they are accurate; an outdated comment should be removed or rewritten. 4. Validation Errors The user’s ability to enter data is tested. The code must handle any string that is typed into an input box, and your code’s input parameters should be checked. Can negatives also be included in the mix? • What is the range of input? • What sort of input is allowed, and how should you act in certain circumstances? Code reusability is important for reducing file size and length, as well as for improving code structure and organization. Duplicate methods or code blocks should be checked for in your program. Consider using classes, methods, and components that are reusable.
The following concepts may help make code more reusable: - Assemble dependencies outside the class and inject them properly within the class - The code should not try to accommodate improbable usage cases. For the sake of making their code future-proof, we’ve all seen writers create extra features that will never be needed. In an instant, code that has never been utilized becomes legacy code. - The aim is to strike a balance between code that can be reused and code that isn’t required for the application. Using the DRY Approach DRY, short for Don’t Repeat Yourself, is one of the first maxims taught to new programmers. In other words, do not use the same code or functionality on different parts of your site. Code reviews are enhanced most effectively when repetitive code is identified and replaced with reused functions or classes. Taken too far, however, this may lead to code so generic that it is unusable in any of the possible use cases. If a program is easy to understand, then it is easily readable. Being able to understand the code’s inputs and outputs, as well as what each line of code does and how it fits into the greater scheme of things, is what it means to “know your code.” As you go through the code, you should be able to identify the purpose of certain functions, methods, or classes. Often, code is not broken down into small enough parts to be readily read and comprehended; this leads to long classes, or methods and functions with too many responsibilities. Good names make it simpler to read the code. Often, developers don’t take security into account while developing code, resulting in security flaws. It’s usually a good idea to use a security code review checklist while reviewing security code. For the code to be consistent, it must conform to a particular architectural style across the whole application. Use the previously defined design pattern as a reference for architectural evaluation.
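As a concrete sketch of the kind of DRY refactor a reviewer might request (all names here are hypothetical):

```python
# Hypothetical before/after sketch of a DRY refactor: the same
# normalization logic was pasted into two request handlers, so a reviewer
# asks for it to be extracted into one reusable helper.

# Before: duplicated in handle_signup() and handle_profile_update():
#     name = raw_name.strip().lower()
#     if not name:
#         raise ValueError("name required")

def normalize_name(raw_name: str) -> str:
    """Single shared implementation both handlers now call."""
    name = raw_name.strip().lower()
    if not name:
        raise ValueError("name required")
    return name

def handle_signup(raw_name: str) -> dict:
    return {"action": "signup", "name": normalize_name(raw_name)}

def handle_profile_update(raw_name: str) -> dict:
    return {"action": "update", "name": normalize_name(raw_name)}
```

A future change to the validation rule now happens in one place, which is the whole point of the maxim; it also shows the balance to watch for, since extracting a two-line helper is worthwhile only because two call sites genuinely share it.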
The methodology should be documented and the baseline established before any modifications are made to the design. Use a design review template to help you organize your thoughts. The code should, if required, be split into presentation, business, and data layers, as appropriate. The core design should be consistent with prior software. 8. Speed and Performance User experience and resource consumption are the two aspects through which performance is assessed. How quickly your code executes affects the experience of your users. Slowness may be caused by database queries that take a long time, unoptimized assets, and API calls that need many requests to be completed. The 20 percent of optimizations that produce 80 percent of your results should be your main emphasis and top goal. It isn’t necessary to optimize for speed if it does not impact the user (or your metrics) or is not worth the effort. “The reason Google code review is so quick is due to two major factors. First of all, 90 percent of Google’s code reviews consist of fewer than ten files each. A single person writes another 75 percent of reviews.” Check whether the code you’re analyzing needs any further documentation before you proceed. When a new project is being developed, a readme file should be included that explains why the project exists and how to use it. If new features or tools are added to an old project, you may want to consider updating the readme file. If you are evaluating documents, a document review checklist template may be of use to you. Ask yourself, as a software user, whether the user interface is simple to understand. End-users will have more problems if they have difficulty using the software. People rush into the development process too early without a proper UI/API design, which leads to a lot of errors. If you’re still not convinced, work with your team on user interface design. Fault-tolerant code is reliable.
Reliable code degrades gracefully, minimizing the harm to user experience when things go wrong. Occasionally, assets won’t load, API requests may return 500 error codes, and database entries may be missing. Things will break. This is the foundation on which reliable programming is based: a certain amount of failure is anticipated. Code written with the expectation of everything going well often fails catastrophically. 12. The System’s Capacity to Scale Consider the code’s scalability while assessing it by imagining what would happen if you abruptly loaded it. What happens to a website’s homepage when it gets famous and receives hundreds of requests per second? The user has hundreds of activities on your app; what would happen if they were viewed all at once? One hundred people try to buy your product at once after reading about it in the news, and they all fail. This should be taken into consideration while evaluating the code. When your website, app, or service goes down, you’re most likely to find scalability issues. 13. Incorporation of Patterns Determining whether the new code will fit into the patterns that your team has already established is also essential. If your codebase has a style guide, the new code most likely has to follow it. This means that you must, at all costs, avoid needless departures in the new code. Everyone has blind spots when it comes to writing code: techniques we don’t investigate, efficiencies we don’t recognize, and elements of the system we don’t completely understand. At least one other developer should evaluate a piece of code before it is placed into production. Many development teams feel that code review is worth the wait since it increases code quality. Microsoft and Google, two of the world’s most successful companies, employ code reviews to check website code quality. I think it would be helpful to estimate the time required for code review checklist completion before beginning.
Engineers often disregard code reviews because of deadlines and completion dates. A more significant check should always be performed before a less critical one: if you run out of time, the important problems will already be addressed, and the remaining smells will not create bigger problems in the long run.
OPCFW_CODE
Posted 15 December 2006 - 01:31 PM This new casual online game is ideal for teachers, maths students and parents. For teachers, www.CombinationLock.com provides a novel approach to helping students improve their powers of reasoning, deduction and logic. Maths students can play in their own time, either ‘solo’ or competing in multiplayer mode against other puzzlers across the globe. For parents, www.CombinationLock.com provides a fun way to stimulate their children’s interest in maths and logic. Players are presented with an image of a combination lock which can be unlocked by using clues to enter the correct digits into the reels (from two to six, depending on the level of difficulty selected). Clicking on the ‘Add a Clue’ button allows players to receive additional clues to help them solve the puzzle, which has only one possible answer. In the ‘Solo play’ mode, players complete three games sequentially as the locks are opened. The site automatically stores players' best times and shows how they compare with other players both in the same country and across the world. The best players are also highlighted on the home page of www.CombinationLock.com The Multiplayer version lets puzzlers race to complete the same game, with the same clues at the same time. Once a player has successfully solved the puzzle, the other player(s) can continue to play until they too have finished. CombinationLock.com doesn’t use Macromedia Flash, requires no downloads or plug-ins, and is thus instantly accessible without running into firewall problems. No registration is required. Posted 27 September 2007 - 02:49 PM Even those who are rusty at math can use this tutorial - all you need is pen and paper. It "works" your reasoning ability. No prerequisites are required. Quizzes are available after each section, so you can proceed or read again. Kudos to Chris Caldwell, who created this fun, rewarding tutorial. Edited by Kathy Beckett, 27 September 2007 - 03:19 PM.
Posted 13 December 2007 - 08:12 AM What's the shortest line that surrounds the largest area? A line around oneself, declaring oneself to be standing on the outside. (School hols + Christ Mass almost here = limited forum participation for many weeks) Sounds a bit like Mastermind or even Cluedo, John. Once one has mastered the 'trick' one should not fail, i.e. inherent limitations; still, useful. Anything that awakens some form of lateral thinking is good. (IMO) The "Myst" series of computer games (no shoot-em-up, just beautiful and complex) is chock-a-block with gradually harder and harder puzzles. Some puzzle classics: You are in a room with two doors. At each door there is a person. A is a liar, B is a truthteller, and you do not know which is which. Behind one door is a hungry tiger, behind the other freedom. Only one question is allowed, to only one person. What is it? Draw a grid of dots, three by three. How do you draw only four straight lines, without lifting the pen, that pass through all the dots?
OPCFW_CODE
We ensure that we provide plagiarism-free assignments with excellent content and supply plagiarism reports free of cost, so that learners do not need to check the plagiarism percentage separately. An argument consisting of the executive summary, marketing strategy, and company description of an entrepreneur, along with a systematic SWOT analysis supporting them. This reminds me that I detest the IBM System i platform (aka IBM Power Systems, aka iSeries, aka AS/400). Don't get me wrong -- I am sure it is great technology. I'm sure IBM supports quite a few corporations with it and they are happy (although I do question why, a decade ago, This port is only required to be open when you are connecting to a remote instance of the Integration Services service from Management Studio or a custom application. If you just post the output from a SQL*Plus SELECT statement, it will take us 5 or 10 minutes to reverse-engineer that, create a CREATE TABLE statement, and insert all the data into it. Save us that time, and make it easy for us to answer you. Give lots of specific data, and supply a reasonable test case. This is important for the efficient operation of application programs that issue complex, high-frequency queries. It is especially important when the tables to be accessed are located in remote systems. First, to ensure that we update the email correctly, we query Mary's email from the employees table using the following SELECT statement: Business Project Management: a case study determining the advice the project manager is likely to supply to the PM for the purpose of determining early start/free slack.
In the Ensembl project, sequence data are fed into the gene annotation system (a set of software "pipelines" written in Perl) which creates a set of predicted gene locations and saves them in a MySQL database for subsequent analysis and display. The first set of statements shows three ways to assign users to user groups. The statements are executed by the user masteruser, which is not a member of the user group listed in any WLM queue. No query group is set, so the statements are routed to the default queue. The user masteruser is a superuser and the query group is set to 'superuser', so the query is assigned to the superuser queue. The user admin1 is a member of the user group listed in queue 1, so the query is assigned to queue 1. While we would never recommend disabling the latter on production, the truth is that on an import, we do not care if the data ends up corrupted (we can delete it and import it again). There are also some options on certain filesystems to avoid setting it up. Besides its website, Ensembl provides a Perl API[5] (Application Programming Interface) that models biological objects such as genes and proteins, allowing simple scripts to be written to retrieve data of interest. The same API is used internally by the web interface to display the data. The experts associated with us are highly qualified and proficient in all of the domains. Our writers make sure to meet the quality standards and assist you with any academic task. Routinely rebuilding btree indexes usually does far more harm than good. Only take action for a reason, and measure/evaluate whether your goal was accomplished by the action you took.
OPCFW_CODE
Configuring Multiple NICs on Windows 2008 Server for Network Backup I have an IBM x3650 M4 running Windows 2008 Server and it is used as the DNS server and the file server. I also have a Windows 7 machine on the same network configured as the "backup server". I am using the UrBackup tool, which schedules a backup every day. The data is "pulled" by the "backup server" from the Windows 2008 server. Since the server has multiple NICs, I would like to use one of them as a dedicated link for backup of data. How can I configure the DNS entries / hosts file entries on the Windows 2008 server so that the dedicated NIC is used instead of the public NIC during backup? I know I will have to configure a new NIC on the "backup server" also, but that one is easy. How do I force the backup script to use the dedicated link and not the public one? Thanks for your help. Regards, Anand You make sure your backup server can reach the file server via its IP on the separate NIC, and then your backup scripts / settings must point to the IP of the new NIC. Easy. Thank you both for your replies. I will try this and let you know. Anand, you could try this by adding a static route from your Windows server to the backup server. Let's assume your Windows server primary NIC is <IP_ADDRESS> and your secondary <IP_ADDRESS>, while your backup is <IP_ADDRESS>. Then you could add this static route: route add <IP_ADDRESS>/32 <IP_ADDRESS> metric 10 A low metric value would force this 1:1 route to be the primary path for Ethernet packets. Thanks for the responses. Although it is easy to configure the backup server to point to the new NIC of the Windows server, how do I ensure that traffic is directed on the dedicated NICs?
Ex: Windows 2008 Server Primary NIC - WinSrv1 (<IP_ADDRESS>) Secondary NIC - WinSrvBck (<IP_ADDRESS>) Backup Server (Windows 7) Primary NIC - Srv1 (<IP_ADDRESS>) Secondary NIC - SrvBck (<IP_ADDRESS>) There is no setting in a backup tool that says, "Push or pull all traffic using the secondary NICs or the hostnames". Although I can tell the tool that the data is on "WinSrvBck", how do I ensure that data travels from WinSrvBck only over SrvBck? Is there something in Windows that can tell the OS to use SrvBck for backup data? Thanks, Anand It's quite simple: add a new network interface to the Win2008 server add a new, dedicated IP address to the new interface configure your Win7 machine to point at the new IP address assigned to the new server interface.
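One hedged way to lay this out (all addresses below are made up for illustration) is to put the two backup NICs on their own subnet, so ordinary IP routing keeps the backup traffic on the dedicated link without any special setting in the backup tool:

```shell
rem Illustrative only - the addresses are made up; substitute your own.
rem Put the backup NICs on their own subnet so normal routing keeps
rem backup traffic off the production LAN.

rem Windows 2008 file server:   WinSrv1   192.168.1.10  (production NIC)
rem                             WinSrvBck 10.0.0.10     (backup NIC)
rem Windows 7 backup server:    Srv1      192.168.1.20  (production NIC)
rem                             SrvBck    10.0.0.20     (backup NIC)

rem On the backup server, point UrBackup at 10.0.0.10. Because the
rem 10.0.0.0/24 subnet is only reachable through SrvBck, the OS has to
rem send that traffic out the dedicated NIC.

rem Optional explicit host route (run on the backup server):
route add 10.0.0.10 mask 255.255.255.255 10.0.0.20 metric 10
```

The key point is that the tool never needs a "use this NIC" setting; addressing the server by its backup-subnet IP (or a hosts-file name mapped to it) is enough for the routing table to pick the dedicated interface.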
STACK_EXCHANGE
At ADD.xyz, our goal is to create a fully decentralised ecosystem from the grassroots, not top down. We understand that in our industry, projects and companies need to be aligned with their users more than ever. That is why we believe that we need to fully decentralise over the course of our development. A draft architecture and proposal for our future governance system will replace some aspects of centralised executive decision making by the project, enabling users to argue, discuss and vote on the priority changes that you’d like to see within ADD.xyz. The ADD.xyz governance system Participation starts with the ADD.xyz governance token, $ADD. It’s a single token across the entire platform that also works within our governance process. In addition to being a standard ERC-20 asset, $ADD allows the owner to delegate voting rights to the address of their choice: the owner’s wallet, another user, an application, or a DeFi expert. Anybody can participate in ADD.xyz governance by receiving delegation, without needing to own $ADD. Therefore, with the risks of governance reduced through a delegation system, participants can weigh in on proposal discussions without the risk of losing their token assets. We believe that an ERC-20 delegation system is the way forward, not just for ADD.xyz, but for the entire DeFi space. $ADD revolutionises community governance — it is not a fundraising device or a purely speculative opportunity; it’s a system representing capital value and democratic participation. Grassroots; not top-down decisions A simple governance framework has been proposed so that ADD token holders can easily participate in shaping the direction of ADD.xyz. This is how it works: Each $ADD token is counted as 1 vote.
Anybody with 1% of the total supply (approximately 1,200,000 ADD) of ADD delegated to their address can propose a governance plan; these are simple or complex sets of plans, such as adding support for a new currency/protocol, changing priorities on integrations, deciding pool splits from fees, or other functions that only executives can currently change. All proposals are subject to a 14-day voting period, and any address with voting power can vote for or against the proposal. If a majority, and at least 5,000,000 votes (4% of total supply), are cast for the proposal, it is queued in the Timelock and can be pushed to the team for implementation. Towards Aggregated DeFi governance We need to do more to encourage other DeFi projects to implement an open, transparent governance system like ours; we need an interconnected way of doing governance together. This is why ADD.xyz will also use any stakeholder-allocated assets provided to ADD.xyz by protocols like Compound, Dydx, Aave and others, and allow ADD.xyz users to influence other projects’ proposals using the ADD stake. Users can also delegate their voting rights to the ADD address in order to combine voting power and weighting. We’ll be rewarding users who participate in the testing of the ADD.xyz governance system: governance plans, voting, defeating plans, Timelock contracts and our unique ‘Solidarity voting actions’ over the next few months - stay tuned for more!
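The framework just described is mostly arithmetic, which can be sketched in a few lines. The following Python is purely illustrative: the total-supply figure is back-calculated from the "1% ≈ 1,200,000 ADD" statement, and none of these names correspond to an actual ADD.xyz contract API (note too that the post's "4% of total supply" gloss does not quite match 5,000,000 of a ~120M supply, so the absolute number is used here):

```python
TOTAL_SUPPLY = 120_000_000                 # implied by "1% ≈ 1,200,000 ADD"
PROPOSAL_THRESHOLD = TOTAL_SUPPLY // 100   # 1% of supply delegated to propose
QUORUM_FOR_VOTES = 5_000_000               # minimum "for" votes, per the post

def can_propose(delegated_votes: int) -> bool:
    """An address may propose once 1% of supply is delegated to it."""
    return delegated_votes >= PROPOSAL_THRESHOLD

def proposal_passes(votes_for: int, votes_against: int) -> bool:
    """A majority of cast votes AND at least the quorum of 'for' votes."""
    return votes_for > votes_against and votes_for >= QUORUM_FOR_VOTES
```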
OPCFW_CODE
I was trying to get the phylogenetic tree for the MAGs generated from my metagenomic datasets. The command that I have used is: phylophlan --input PHYLO_IN/ -d phylophlan -t a --databases_folder PHYLO_DB/ --diversity high --output_folder PHYLO_OUT -f PHYLO_OUT/supermatrix_aa.cfg --genome_extension fasta and the final results I got are: PHYLO_IN.tre, PHYLO_IN_resolved.tre, RAxML_bestTree.PHYLO_IN_refined.tre, RAxML_info.PHYLO_IN_refined.tre, RAxML_log.PHYLO_IN_refined.tre, RAxML_result.PHYLO_IN_refined.tre What is the difference between these different .tre files? I tried to visualize the tree in the iTOL web tool, but that does not give any taxonomic information for the bins. How do I get that? You can find a description of the outputs in the documentation here. In brief, RAxML_bestTree.PHYLO_IN_refined.tre should be your final phylogeny in the above example. The taxonomic information is not something you’ll get out of the tree if you don’t have other genomes in it with a known taxonomic label assigned to them. What you can do alternatively is run phylophlan_metagenomic, which will provide you with the closest species-level genome bins (SGBs) to your input MAGs, so that you can understand whether or not your MAGs belong to an already existing SGB. Thanks for all the responses you made earlier to all the questions I had. Now I have a few more.
I want to confirm that I am doing everything correctly, so I am mentioning all the steps that I have done for making the tree: downloaded phylophlan config files phylophlan --input /lustre/rsharma/PHYLO_ANALYSIS/ALL/ -d phylophlan -t a --databases_folder /lustre/rsharma/PHYLO_ANALYSIS/PHYLO_DB/ --diversity high --output_folder /lustre/rsharma/PHYLO_ANALYSIS/OUTPUT/ -f /lustre/rsharma/PHYLO_ANALYSIS/OUTPUT/supermatrix_aa.cfg --genome_extension .fa --force_nucleotides --nproc 50 My MAGs are in .fa format but the config file that I am using is in “aa” format; is that alright, or do I need to change anything? When I tried to make the database using the “nt” config file it was not making the database and giving some error, so I tried with this one and it started to run. Can I use some other tool for obtaining taxonomy, like GTDBTK, and then use ITOL to label the phylogenetic tree with the species? Is this the correct way of visualizing the MAGs’ phylogeny and taxonomy as well? One more question: if the above-mentioned way is correct, then while making the phylogenetic tree what diversity level shall I choose (low, medium, or high)? I’ll report the pieces so that it will be clearer which point I’m answering. Instead of downloading config files, you can use PhyloPhlAn to generate what you need, so as to ensure that the tools as defined in your system are correctly matched. Your inputs can be both genomes and proteomes, so no problem at all with that. The aa (or nt) I put in the config file name is a convenient way for me to identify configs tailored for genes/nucleotides (nt) or proteins/amino acids (aa) databases. This is because only with protein databases is the translated search available, which can deal with both genomes and proteomes as input. Of course, you can use any other tool like those you mentioned.
Alternatively, you can use phylophlan_metagenomic, which will report the closest SGB found to your MAG, and you can use this information to taxonomically characterize your MAGs. For visualization, we have GraPhlAn within bioBakery, which is very flexible but would require a bit of scripting to get colorful and annotated figures. In the example above you’re running --diversity high and --accurate (the default if not specified). You can find a bit more info about the available combinations here: Home · biobakery/phylophlan Wiki · GitHub. There are some cutoffs that differ between --accurate and --fast, but they probably won’t be too dramatic in your case. I think the main difference is more in the expected diversity among the MAGs that you want to phylogenetically characterize. If you expect low genomic diversity, medium and high may be too aggressive and might cut out a bit of the phylogenetic signal.
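On labelling the tree with taxonomy obtained from another tool: one simple approach is to rewrite the Newick leaf names so that iTOL displays the taxonomy directly. A hypothetical sketch in Python (the bin names and taxonomy strings below are invented, and a production script should match whole leaf names, since a plain substring replace would let bin_1 also match bin_10):

```python
def relabel_newick(newick: str, taxonomy: dict) -> str:
    """Append a sanitized taxonomy label to each matching leaf name."""
    for mag_id, label in taxonomy.items():
        # iTOL is happier without spaces/colons in leaf names
        safe = label.replace(" ", "_").replace(":", "_")
        newick = newick.replace(mag_id, f"{mag_id}|{safe}")
    return newick

tree = "(bin_1:0.12,(bin_2:0.08,bin_3:0.15):0.05);"
tax = {"bin_1": "s__Escherichia coli", "bin_2": "s__Bacteroides fragilis"}
print(relabel_newick(tree, tax))
```

Bins without an assignment (bin_3 here) simply keep their original leaf name.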
OPCFW_CODE
Office VBA Reference, Excel VBA: the list of worksheet functions available to Visual Basic, and using Excel worksheet functions in Visual Basic. Custom functions, like macros, use Visual Basic for Applications (VBA); a custom function must start with a Function statement and end with an End Function statement. Except for a few people who think Excel is a word processor, all Excel users incorporate worksheet functions in their formulas, and most of those functions can also be called from VBA through the WorksheetFunction object. The worksheet in Figure 1 shows an order form that lists each item ordered; in column F, we want to calculate the discount for each item, and once the DISCOUNT custom function is written you can copy the DISCOUNT formula down column F. A few worksheet functions are not exposed this way: one such case is the MOD worksheet function, which is replaced within VBA by the Mod operator. Other commonly used VBA statements include If Then, which executes code lines if a specific condition is met, and there are worksheet functions for tasks such as calculating the interest part of a payment, during a specific period, for a loan or investment, or returning the position of a substring within a string, searching from right to left. The Visual Basic Editor also helps with formatting: after you type an indented line, it assumes your next line will be similarly indented. Excel guru John Walkenbach agrees (Excel VBA Programming for Dummies), and further reading includes Microsoft Office Excel Inside Out by Mark Dodge and Craig Stinson. Finally, a statement in the DISCOUNT function rounds the value assigned to the Discount variable to two decimal places.
OPCFW_CODE
Julia loops are as slow as R loops The code below in Julia and R is to show that the estimator of the population variance is a biased estimator, that is, it depends on the sample size, and no matter how many times we average over different observations, for a small number of data points it is not equal to the variance of the population. Julia takes ~10 seconds to finish the two loops, while R does it in ~7 seconds. If I leave the code inside the loops commented out then the loops in R and Julia take the same time, and if I only sum the iterators by s = s + i + j Julia finishes in ~0.15s and R in ~0.5s. Is it that Julia loops are slow, or has R become fast? How can I improve the speed of the code below for Julia? Can the R code become faster? Julia: using Plots trials = 100000 sample_size = 10; sd = Array{Float64}(trials,sample_size-1) tic() for i = 2:sample_size for j = 1:trials res = randn(i) sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res)) end end toc() sd2 = mean(sd,1) plot(sd2[1:end]) R: trials = 100000 sample_size = 10 sd = matrix(, nrow = trials, ncol = sample_size-1) start_time = Sys.time() for(i in 2:sample_size){ for(j in 1:trials){ res <- rnorm(n = i, mean = 0, sd = 1) sd[j,i-1] = (1/(i))*(sum(res*res))-(1/((i)*i))*(sum(res)*sum(res)) } } end_time = Sys.time() end_time - start_time sd2 = apply(sd,2,mean) plot(sqrt(sd2)) The plot in case anybody is curious!: One way I could achieve much higher speed is to use a parallel loop, which is very easy to implement in Julia: using Plots trials = 100000 sample_size = 10; sd = SharedArray{Float64}(trials,sample_size-1) tic() @parallel for i = 2:sample_size for j = 1:trials res = randn(i) sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res)) end end toc() sd2 = mean(sd,1) plot(sd2[1:end]) in R I would do: my.sd <- function(i) {; res <- rnorm(n = i, mean = 0, sd = 1); mean(res*res) - mean(res)^2; }; sd <- replicate(trials, sapply(2:sample_size, my.sd)) After you wrap this in a function, you will see that
almost the entire time is spent inside randn; it has nothing to do with the speed of loops in Julia vs R. You are also writing sum(res)*sum(res) instead of sum(res)^2, and sum(res.^2) instead of sum(abs2, res), which are both wasting resources. You can rewrite to this: sd[j, i-1] = sum(abs2, res) / i - (sum(res) / i)^2. @DNF, all this is correct, but additionally in Julia loops executed in global scope are themselves slower than loops executed in a function. Using global variables in Julia is in general slow and will give you speed comparable to R. You should wrap your code in a function to make it fast. Here is a timing from my laptop (I cut out only the relevant part): julia> function test() trials = 100000 sample_size = 10; sd = Array{Float64}(trials,sample_size-1) tic() for i = 2:sample_size for j = 1:trials res = randn(i) sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res)) end end toc() end test (generic function with 1 method) julia> test() elapsed time: 0.243233887 seconds 0.243233887 Additionally, in Julia if you use randn! instead of randn you can speed it up even more, as you avoid reallocation of the res vector (I am not doing other optimizations to the code, as this optimization is specific to Julia in comparison to R; all other possible speedups in this code would help Julia and R in a similar way): julia> function test2() trials = 100000 sample_size = 10; sd = Array{Float64}(trials,sample_size-1) tic() for i = 2:sample_size res = zeros(i) for j = 1:trials randn!(res) sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res)) end end toc() end test2 (generic function with 1 method) julia> test2() elapsed time: 0.154881137 seconds 0.154881137 Finally, it is better to use the BenchmarkTools package to measure execution time in Julia. First, the tic and toc functions will be removed in Julia 0.7.
Second, you mix compilation and execution time if you use them (when running the test function twice you will see that the time is reduced on the second run, as Julia does not spend time compiling functions). EDIT: You can keep trials, sample_size and sd as global variables, but then you should prefix them with const. Then it is enough to wrap the loop in a function like this: const trials = 100000; const sample_size = 10; const sd = Array{Float64}(trials,sample_size-1); function f() for i = 2:sample_size for j = 1:trials res = randn(i) sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res)) end end end tic() f() toc() Now for @parallel: First, you should use @sync before @parallel to make sure everything works correctly (i.e. that all workers have finished before you move to the next instruction). To see why this is needed, run the following code on a system with more than one worker: sd = SharedArray{Float64}(10^6); @parallel for i = 1:2 if i < 2 sd[i] = 1 else for j in 2:10^6 sd[j] = 1 end end end minimum(sd) # most probably prints 0.0 sleep(1) minimum(sd) # most probably prints 1.0 while this sd = SharedArray{Float64}(10^6); @sync @parallel for i = 1:2 if i < 2 sd[i] = 1 else for j in 2:10^6 sd[j] = 1 end end end minimum(sd) # always prints 1.0 Second, the speed improvement is due to the @parallel macro, not SharedArray. If you try your code in Julia with one worker, it is also faster. The reason, in short, is that @parallel internally wraps your code inside a function.
You can check it by using @macroexpand: julia> @macroexpand @sync @parallel for i = 2:sample_size for j = 1:trials res = randn(i) sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res)) end end quote # task.jl, line 301: (Base.sync_begin)() # task.jl, line 302: #19#v = (Base.Distributed.pfor)(begin # distributed\macros.jl, line 172: function (#20#R, #21#lo::Base.Distributed.Int, #22#hi::Base.Distributed.Int) # distributed\macros.jl, line 173: for i = #20#R[#21#lo:#22#hi] # distributed\macros.jl, line 174: begin # REPL[22], line 2: for j = 1:trials # REPL[22], line 3: res = randn(i) # REPL[22], line 4: sd[j, i - 1] = (1 / i) * sum(res .^ 2) - (1 / (i * i)) * (sum(res) * sum(res)) end end end end end, 2:sample_size) # task.jl, line 303: (Base.sync_end)() # task.jl, line 304: #19#v end Thank you for the answer. When I use a shared array to do parallel computing the code becomes fast too (0.1s). Does this have anything to do with global variables, apart from the fact that I am using six cores? Is it possible to define global variables in a better way rather than putting the code inside a function? Are there any references to explain why global variables make Julia faster? Are there any risks with using global variables? It is explained in the Julia Manual https://docs.julialang.org/en/latest/manual/performance-tips/#Avoid-global-variables-1. As for SharedArray: it does not itself help here; @parallel helps. I will improve my answer to cover this. The start of the answer makes it sound like global variables make Julia faster, while the opposite is true. Perhaps clarify that. Also, if you move randn! outside the innermost loop, to work on a matrix, it can be multithreaded. Finally, the expression in the innermost loop is suboptimal (e.g. sum(res)*sum(res).) Thanks. I have written that Julia then has the speed of R (i.e. slow), but I will clarify :).
I did not do other optimizations because MOON wanted to compare the same code in Julia and R (and was not asking how to make the fastest possible Julia code). Thank you both! I learned a lot! If there are any points left (further optimization of the Julia code etc.), please feel free to add to the answer! I have updated the answer - it is long now, as you have asked about many topics in one question. I hope all is clear.
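As an aside on the statistics rather than the performance: the fact the loops above demonstrate - that dividing by n instead of n-1 underestimates the population variance by a factor of (n-1)/n - can be checked directly with a standard library, no simulation needed. A quick illustration in Python (used here only because its statistics module ships both estimators; the thread's own code is Julia and R):

```python
import math
import statistics

sample = [2.1, -0.3, 0.7, 1.4, -1.9]  # any sample works
n = len(sample)

biased = statistics.pvariance(sample)   # divides by n   (the estimator in the loops above)
unbiased = statistics.variance(sample)  # divides by n-1 (Bessel's correction)

# The bias is exactly the (n-1)/n factor the plotted curve converges to
assert math.isclose(biased, (n - 1) / n * unbiased)
```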
STACK_EXCHANGE
I know for a fact that my local telco supplies a China-brand router to customers, and they are able to access my local network through the backdoor built into the router. I have physically witnessed such an event taking place. I accessed the router menu; the router has customized firmware, and I see no option to disable remote access by the telco. While there are other telcos available, they all use the same China-brand routers, so it seems I would have to change to a non-China-brand router to block unsolicited remote access. However, a good wireless router is as costly as a top-end CPU here. And while researching, I came across the idea of a DIY router using old desktop parts. Since I’m building a new desktop, I am wondering if I could integrate the function of a router into my new desktop by merely adding some extra parts (while not exceeding the cost of a wireless router). I would like to know: I’ll be using this “desktop-router” for video remuxing, so I’m hoping I can use Manjaro as the OS instead of one of those dedicated router OSes. Can this be done? I only have 1-2 devices that need a wired connection to the “desktop-router”, and generally a consumer-grade motherboard has 1 network port. I presume I just need a network card that has 2 network ports for such a setup? In the event that I need more wired connections, will adding a network switch downstream address my needs? Would the wireless antenna on a consumer-grade motherboard suffice to act as a wireless router? Or should I be concerned that the Bluetooth connection in the antenna will interfere with wireless networking? What networking software would be needed / recommended for such a setup? I would not do it in a rolling-release distro. In fact, not even in a normal distro. If you really want to use a PC for routing, there are special distributions, like pfSense for example.
A hypervisor and a couple of virtual machines - one for pfSense, one for Manjaro, one for a web or storage server - is the way to go if you want a hobby server room in your home. Probably. I wasn’t the networking guy when I worked at an ISP back then; I just listened to what experiments, weird and less weird, they did… the chief of technical support had a whole rack server at home… for “exercise” purposes… I have not done it. I have only done OpenWrt stuff. If I had to do home routing the complicated way I would probably go Mikrotik. Or Pi-hole, or GL.iNet, or a Raspberry Pi with something. Something with a smaller form factor. 1. I would not mix something as special as a router/firewall/gateway with a workstation; this will cause challenges and issues. 2. That, or combine it with a switch to have more ports. 3. That should work fine. 4. Only if the antenna/chip can function as an access point and has enough reception (not all can, IIRC). 5. Something designed and built to do that job, not a desktop OS. See if there are some low-cost devices that can do the task dedicated; if you don’t need 1 Gb/s+, most hardware with 2 network cards will do in a pinch, and there is plenty to choose from. There are options like OpenWrt, pfSense etc. that run on a variety of hardware - sometimes on old hardware that is cheap to get at a flea market / second-life shop. (This will differ by where you live, I suppose; here lots of working hardware is thrown away, mostly because it is old, not because it does not work anymore.) edit: I suppose what I wrote above is the longer version of the post zbe made. You assume that zbe does not care, but zbe does care and wants you to succeed, and proposes you get the best solution for your stated problem, as Theo and I also propose. I think both @Teo and you have proposed a new approach using Hyper-V. Like Teo, I’m no networking guy, and I’m not familiar with Hyper-V either. I’ll look for more info on this, and see whether pfSense / OpenWrt would function as intended on Hyper-V.
In the meantime, any suggestions, how-to tips, or other alternatives are highly appreciated. Hyper-V is a technical solution that we did not propose, if I re-read what was written. Using a bare-metal hypervisor to run the router function alongside some other OS with another function is an option - look at Proxmox as an example; I’m sure there are others. In combination with desktop functionality, this is hard or simply not possible. There are options for the situation: Use a desktop OS with more than one network card & Wi-Fi adapter as a router. Advice: do not do this with a desktop OS. Use a dedicated piece of hardware and software for the router role. Advice: this is the way to go. Some ideas for the dedicated router option (there are more options than this): A Ubiquiti set of hardware and a Wi-Fi access point https://www.ui.com/ Cost: probably a lot; for the full stack 3 things are needed, IIRC. Will it do what you need: yes, and it even looks pretty if you want your network gear to show itself. An OpenWrt solution https://openwrt.org/ Cost: probably less than a pfSense solution, depending on the hardware you can get. Will it do what you need: yes. A cheap consumer router Cost: there are $50-75 routers. Will it do what you need: probably; some functionality might be missing, and it might not be as resilient or as stable, or perform as well, as the other options. It might be possible to run OpenWrt or pfSense or some other open solution. A mainstream consumer router Cost: more than the cheap router. Will it do what you need: yes. It might be possible to run OpenWrt or pfSense or some other open solution. Leftover PC parts Will it do what you need: yes, and you will learn something doing it. First, my gratitude to @Hanzel for your time and effort gathering this info, not to mention how organized its presentation is. I think my original idea has been “vetoed” - primarily because it involves a desktop OS.
I’m curious about the reason for the “veto”: is it due to the complexity of setting up pfSense / OpenWrt on Hyper-V, or the potential impact on network performance from involving Hyper-V? As mentioned in my first post, I was thinking of adding a “router” function to a new desktop project without incurring more cost than a new router. So, with people / gurus “vetoing” this idea, it would be wise for me to consider alternatives. In that respect, would flashing a consumer router with OpenWrt provide better security than the factory firmware? What I would do in your situation is simply use a typical home router; a Billion-branded one, for example, which is often provided by ISPs and is, in any case, generally more affordable. Configure the router with security in mind, and… done! If I were particularly paranoid about security on Linux, I might also configure a simple firewall on each connected machine - GUFW, for instance - as an additional layer of security. That is all. Cheers. To an earlier question: I have only done OpenWrt on classic routers (MIPS/ARM). Once again, I would advise against such a project for all practical purposes. If one wants more tinkering at home, there are routers with OpenWrt support, or Mikrotik, for that. Using an essentially desktop-grade x86 machine, even with the right software like pfSense on ESXi or similar, is also not wise performance-wise. And after buying a decent switch, LAN cards and Wi-Fi cards, it is also more expensive. Such a project only makes any sense for learning, as in if you are going to acquaint yourself with the basics of networking, learn CCNA or something. This depends on the existing hardware he has to create a DIY router. There are nice solutions using a thin client or a Raspberry Pi (or a Raspberry Pi clone). The main focus should be the power draw of the DIY router system: something up to 20 watts is acceptable; it is a no-go if it’s an older PC whose power supply is several hundred watts.
P.S.: such a project gets interesting if the DIY router is combined with a DIY NAS (which is very easy with a thin client), together with one or two large-capacity HDDs.
OPCFW_CODE
Is it possible to disable UEFI on Skylake motherboards? Maybe a better question is: is UEFI motherboard-specific, chip-specific or both? UEFI is the replacement for BIOS. What do you actually want to disable? EFI boot? @Jonno I don't want to use UEFI, I want to use an ordinary BIOS. I am willing to bet the legacy BIOS will be left in the dust starting with Skylake processors, as are other items...http://wccftech.com/intel-skylake-remove-support-usb-based-windows-7-installation-platform-specs/ @TylerDurden - Well, what you want isn't possible. What reason do you have for wanting to use BIOS, which limits the HDD capacities you can boot from, instead of UEFI, which supports larger drives? Legacy BIOS has been dead for several years now (on mainstream computers). The motherboard manufacturer will have implemented either BIOS or UEFI; the firmware is stored in a ROM chip on the motherboard and is not tied to the chipset or CPU. There are no boards that I know of that offer a choice; a board will have one or the other, and there is no 'opt-out' method. This firmware controls all of your low-level devices. BIOS is now outdated and being phased out. As such, you will likely struggle to find a board without UEFI that is compatible with Skylake processors. Edit: After a few comments, I think you're actually referring to disabling certain UEFI components to run in a legacy mode. The UEFI is still the underlying system, but there are certain legacy components you can enable. Using the manual from this board as an example: VGA Support Allows you to select which type of operating system to boot. Auto Enables legacy option ROM only. EFI Driver Enables EFI option ROM. (Default) CSM Support Enables or disables UEFI CSM (Compatibility Support Module) to support a legacy PC boot process. Enabled Enables UEFI CSM. (Default) Disabled Disables UEFI CSM and supports UEFI BIOS boot process only.
Storage Boot Option Control Allows you to select whether to enable the UEFI or legacy option ROM for the storage device controller. Disabled Disables option ROM. Legacy Only Enables legacy option ROM only. (Default) UEFI Only Enables UEFI option ROM only. This item is configurable only when CSM Support is set to Enabled. There are components of UEFI you can run in a traditional BIOS way, but you are still using UEFI firmware. I am confused as to why you don't want to use UEFI, but that's not within the scope of the question. Some motherboards do have a choice between UEFI and legacy BIOS. @Moab I did a little research before writing that and figured I'd add the qualifier 'that I know of' because I guessed there may be exceptions. Do you know of any? I'm curious how they'd work. @Moab I was aware of the choice between UEFI and legacy BIOS on some boards, but I thought that just meant MBR booting vs EFI booting, rather than actually changing the firmware to BIOS. A couple of years ago I was buying a computer to use Linux and I had to be careful to buy a board that allowed the UEFI to be disabled, because some kinds of Linux did not support it, but I thought things may have changed. @TylerDurden I've now modified my answer as I think I now understand what you want to achieve. Just to note - UEFI is quite broadly supported now; unless you're using a distro that hasn't had much attention recently, you shouldn't have too much problem leaving everything in UEFI mode rather than legacy. @TylerDurden - Currently there are virtually no maintainer-supported Linux distributions that don't support UEFI. If you are using such a distribution, you should consider the security implications of doing so. Ah, MBR, you are correct. Embedded Linux distributions often used legacy-mode BIOS and GRUB scripts... disabling UEFI in the firmware setup is a LOT easier. I have many boards with "OS Select". That said, some new Intel Atom tablet firmware is UEFI-only, which is a pain.
With the exception of a few computers that implement EFI as a feature that's run from a BIOS (such as Gigabyte's abysmal "Hybrid EFI"), computers that support EFI-mode booting use EFI, not BIOS. Thus, there is no such thing as "switching off the EFI." What many computers do permit is booting BIOS-mode OSes via a feature called the Compatibility Support Module (CSM). This is an add-on feature that permits an EFI to run BIOS-mode boot loaders. It's logically similar to dosemu or WINE under Linux, which permit Linux to run DOS or Windows programs. Importantly, when you use a CSM, the computer is still running EFI, so you haven't really gotten rid of anything EFI-related; you've just pushed it out of the way. If you simply need to run an old EFI-unaware OS, this is probably fine. If you're philosophically opposed to EFI, this won't do any good. If you want to run Windows, Linux, or some other EFI-aware OS in BIOS mode, the CSM will do the job, but the question then becomes: Why do you want to boot in the old way? There are few or no practical advantages to booting an OS that supports both boot modes via a CSM, and doing so adds complication to the boot path, so doing it this way is likely to create new problems. Whether you like it or not, EFI is the future of computers, at least for the next few years. If you want a true old-school BIOS, you pretty much have to stick with an older computer. There is one possible workaround, though: You can use CoreBoot, which is an open-source, minimalistic firmware for some computers. CoreBoot is useful only when paired with one of several payloads, which are tools that rely on CoreBoot's basic hardware-initialization code. There is a payload that implements a BIOS, so you can install CoreBoot plus its BIOS payload to get back to the old-fashioned way of working. (There's also a UEFI payload, if you want to go with something more modern without whatever stuff the computer manufacturer has added to its EFI.)
CoreBoot itself is tiny (more like the hardware-initialization part of BIOS than like EFI), so using CoreBoot in this way is different from using an EFI plus its CSM. The trouble with CoreBoot is twofold. First, it's developed for a limited set of computers. It can be made to work with more, but if you don't want to take a gamble on it working (possibly bricking your computer if it fails), you must pick your computer from the limited list of supported models, many of which are older. I haven't checked, but I doubt that CoreBoot yet supports any Skylake boards, although it might in the future. Second, installing CoreBoot is a highly technical task; it's not a point-and-click operation like installing the average program. If an installation fails, the firmware may have to be repaired by physically removing the chip on which it's stored, so there's significant risk, particularly if you're not comfortable with such tasks. Between these two factors, you have to be pretty dedicated to use CoreBoot.
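One practical footnote to the CSM discussion: even though the firmware itself stays EFI, the operating system can report which boot path was used. On Linux the conventional check is whether the kernel created /sys/firmware/efi, which only happens when the OS was started via EFI boot services. A small sketch (Python, purely illustrative):

```python
import os

def booted_via_uefi() -> bool:
    """True if the running Linux kernel was started via EFI boot
    services; a CSM/legacy boot leaves /sys/firmware/efi absent."""
    return os.path.isdir("/sys/firmware/efi")

print("EFI boot" if booted_via_uefi() else "legacy/CSM boot")
```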
STACK_EXCHANGE
The Raspberry Pi single board computer is a remarkable piece of computing technology: about the size of a 2.5" solid-state drive, the Pi 4B I'm using manages to pack in an ARM processor, 4GB RAM, gigabit Ethernet, twin USB 3 ports, twin USB 2 ports and twin HDMI outputs. It isn't the speediest computer in the world, but it runs entirely silently, which is something of a godsend in a music room! I used my Raspberry Pi as my principal music player for around four months: eventually its inability to stay connected to my USB-based DAC saw it consigned to the sock drawer... By design and default, a modern Raspberry Pi is generally kitted out with Raspberry Pi OS, which is an ARM-specific port of Debian with an LXDE front-end (though when I was running my Pi as my 'production' music player, I kitted it out with a KDE front-end: it worked well enough, despite the massive heft of that particular desktop environment). For my testing of Giocoso 3, I did a fresh net-boot and wrote a new copy of Raspberry Pi OS 5 (64-bit) onto a 32GB micro SD card, which after a reboot and a bit of initial setup sprang into life in just a matter of minutes. I then launched the Giocoso installer in the usual way... see if you can spot an initial problem below: The many strange, accented letters in that screenshot tell you immediately that all is not well in this particular Pi-land! They're supposed to be line-drawing characters, making a neat and tidy box at the top of the screen, but if you press on regardless, the Giocoso installation nevertheless completes successfully. Unfortunately, running the program results in something of a typographical disaster: I mean: it's certainly playing music! It's even displaying album art in a graphically-pleasing manner within the terminal... but it remains an unholy mess because of the lack of any line-drawing capabilities!
Fortunately, there's a simple enough fix: I don't know whether my system locale was set incorrectly because I missed something, or because I am a UK user with a US keyboard layout which confuses things, or what else it might be, but the fact remains that when I looked in the main menu -> Preferences -> Raspberry Pi Configuration -> Localisation tab, I saw nothing set for the Locale. In the above screenshot, you see me selecting English, United States, UTF-8 as the correct locale (I should probably have selected English, United Kingdom, UTF-8, but whatever...). The crucial thing is for UTF-8 to be selected in the third drop down box. Click [OK] to confirm that selection and you'll be prompted to reboot. On coming back up, re-run Giocoso once more and you'll see this: ...at which point, everything displays entirely correctly. Had I set the correct locale before installing Giocoso, even the installer would have been displayed correctly. Anyway, once you've sorted out the locale issue (if it affects you at all), Giocoso runs as a well-behaved program on what is an acceptable host. The only slight mood-dampener I could come up with is that re-drawing the Giocoso main program display as you switch between menus isn't as fast as I'd like it to be: there's a noticeable 'lag' and obvious line-drawing going on. It is perhaps to be expected of a computer with the Raspberry Pi 4's CPU and GPU resources. I don't own a Raspberry Pi 5, but I would imagine it would draw and appear altogether more swiftly on that much-improved hardware platform.
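For anyone hitting the same symptom over SSH (where the GUI configuration tool isn't to hand), a quick way to check whether the active locale can render those line-drawing characters is to inspect the charmap. This is a generic sketch, not part of Giocoso itself; the check_charmap helper is my own naming:

```shell
# Minimal locale sanity check: box-drawing characters need a UTF-8 charmap.
check_charmap() {
  case "$1" in
    UTF-8) echo "OK: UTF-8 locale, line drawing should work" ;;
    *)     echo "WARNING: charmap is '$1'; set a *.UTF-8 locale and reboot" ;;
  esac
}

# 'locale charmap' prints the character map of the active locale.
check_charmap "$(locale charmap)"
```

On Raspberry Pi OS you can also set the locale non-interactively with `sudo raspi-config nonint do_change_locale en_GB.UTF-8` (an assumption based on the raspi-config non-interactive interface; verify on your own image).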
@DBCountMan Yes, if you are using the location plugin. Using the location plugin you assign the storage node to a location, and then when you register the target computer with FOG you assign the target computer to a location. When the computer PXE boots it learns which node it needs to communicate with, so a storage node would work in this case to spread the load. One caveat with storage nodes is that they can't capture images. Only the master node can capture images; then all storage nodes (including the master node) in the same storage group can deploy the image. The answer is difficult to explain in just a few words. The fog.download file is actually in the FOS Linux image that gets transferred to the target computer before imaging begins. The FOG developers provide two call-out functions where the fog admin can do things before imaging starts, and just after imaging stops but before the target computer reboots. You will use the first call-out the developers give you, called a post-init script. This call-out script runs just after FOS Linux (the OS that runs on the target computer) boots. My idea is to use this first call-out to copy the patched fog.download script from the fog server to FOS Linux; then when imaging starts it will use your patched file with the data you need to see. I would place the file to copy in the same directory as where the post-init scripts are called from. Now, Sebastian mentioned an easier way to go about this without patching the fog.download file: you again use the post-init call-out script to simply print the info you need, then pause waiting for a key press before continuing. There is nothing to patch; you simply create a bash script in the post-init directory and then call it from the post-init call-out script. Thinking outside the box, you actually have the power of the PHP engine on the fog server.
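As a rough sketch of that second approach, the script below is the kind of thing you might drop into the post-init scripts directory and call from the call-out script. The variable names ($mac, $hostname, $img) and the fallback values are assumptions based on how FOS Linux typically exports boot arguments; check your own install before relying on them:

```shell
#!/bin/bash
# Hypothetical post-init call-out: print identifying info about the target,
# then pause so the tech can read it before imaging continues.
# $mac / $hostname / $img are assumed to be exported by FOS Linux boot args;
# we fall back to "unknown" so the sketch runs anywhere.
show_target_info() {
  echo "MAC address : ${mac:-unknown}"
  echo "Hostname    : ${hostname:-unknown}"
  echo "Image ID    : ${img:-unknown}"
}

show_target_info
# Uncomment on the target so imaging waits for a key press:
# read -n1 -s -r -p "Press any key to continue imaging..."
```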
If one stretches the imagination a bit: the same script that displays your information on the screen could use PHP's mail function to send the contents of those fields in an email to you. This is a little harder to set up, but it's not that hard to do. @alexamore90 Updating FOG is a normal part of FOG administration. I agree it's been quite a while since the last update, but the process is the same. FOG 1.5.10 should be released soon, but until then the FOG developers are suggesting that people upgrade to a pre-release but stable version of the development release. If you used the git method to download the installer files when you installed FOG, then upgrading is easy and you will not lose any configuration settings. The upgrade itself is pretty easy; to perform a dev-branch upgrade there is just one more step. In your case, if you are on FOG version 1.5.9 and you used the git method, you would simply change to the base of the install directory (typically /root/fogproject) and issue these commands on the fog server: git checkout dev-branch The git checkout dev-branch command is what switches the installer to the pre-1.5.10 development code base. When 1.5.10 is released you would simply replace that line with git checkout master to return to the master code base, and issue the same commands. The installer, installfog.sh, will look at all of the answers you provided when FOG was installed and use them during the reinstall. After you update to the dev release you will need to once again update the FOS Linux kernel to the 5.15.x series (FOG WebUI -> FOG Configuration -> Kernel Update), as well as recompile the latest version of iPXE using this tutorial: https://forums.fogproject.org/topic/15826/updating-compiling-the-latest-version-of-ipxe When you complete these after-steps you will be at the level FOG 1.5.10 will be when it's released. The above install process will make sure your FOG install supports the newest hardware released by the hardware manufacturers.
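Pulling those upgrade steps together, a session on the server typically looks something like the sketch below. It is written as a dry run (it only prints the commands) because the install directory (/root/fogproject here) and the installer invocation are assumptions about a default git-based install; adapt it before running anything for real:

```shell
# Dry-run sketch of a git-based dev-branch upgrade: prints the steps, does not execute them.
FOG_DIR="${FOG_DIR:-/root/fogproject}"   # assumed default install location

plan_upgrade() {
  echo "cd $FOG_DIR"
  echo "git pull"
  echo "git checkout dev-branch"          # switch to 'git checkout master' once 1.5.10 ships
  echo "cd $FOG_DIR/bin && ./installfog.sh"
}

plan_upgrade
```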
You say that you have 3 FOG servers; you really should update them all to the same release, or you won't have the fix for the non-movable recovery partition that Microsoft created. @alexamore90 Sorry, I don't use that application. If you are more comfortable posting in your native language, you may do that. We try to keep the posts in English, but we find we sometimes get better detail when the poster uses their native language and we translate. I will only post in English, though. If you have a basic drawing of your setup, that might help too. Remember, we have to imagine your configuration based on the words you use. One additional question I should have asked: is your current configuration for a home lab or a business? @george1421 The ESXi is installed on a host server directly on the HDD; inside there are 3 FOG virtual machines that I need to deploy Windows on various clients. Since I have another host (server) I would like to use it as a backup, but how can I make a backup on the host I have now and restore it on the new one without reconfiguring all over again?
I would like the calendar to display the price using the data from the Excel document. This is part of the fixed price, in order to cover all the latest tables. Currently I am using PHP Excel, to migrate to MySQL if possible. MySQL 5.5 PHP 5.2 Basically: I want to display the captured data in XLS (Excel) to MySQL in calendar form. PS. Objective - take data provided to you and research it using real estate softwares - Input the information in an excel spreadsheet and cross match the information for skip tracing. Requirements -Must be experienced in using Microsoft Excel -Must be good with doing extensive research -must have high speed internet -must speak fluent English Hi I need someone genuinely curious and engaged in the work because this could be a long term relationship. You need strong attention to detail, always testing the finished versions before presenting them for review. In your bid, tell me about your motivation in doing this work. Thanks I am looking for web and LinkedIn research expert to collect 700-800 contact data. I will give you industry and location name, you have to find the following data. company name website title email address phone If you are fit for this job, please place your bid and I will need sample to check your expertise. Note: I will start the project and make milestone when I confirm you can do it perfectly... ...transitioning from an existing medical conditions lists to the International Statistical Classification of Diseases and Related Health Problems (ICD10) medical classification list. We require a qualified professional with a strong background in medical terminology to map medical condition combinations from our Product Administration System (GLSCI) existing I need Predictive Search and Price of the hotel on calendar in Travel Theme. Here is the theme link for your reference. [Login to view URL] Thanks I have a Beanstalk app that needs to connect to an RDS instance and an AWS Elasticsearch domain. To do this, I'm currently using an IP-list-based policy.
Because the Beanstalk application's IP changes, I now need help to configure a new policy to allow an IP list and also an IAM role attached to the Beanstalk app. I will need a detailed document with step-by-step instructions. From the PDF provided, enter information for each company into this spreadsheet: [Login to view URL] The information required is listed at the top of each column on the spreadsheet: company name, website, and number of employees from each company's LinkedIn account. Hi there, if you are able to find the URL for this link: I'm not able to find the URL for this link when I right click (check the screenshot below). Maybe with some coding experience you have, you might be able to get the link. Here is the link: [Login to view URL] ...Organization About your organization – General Trading Company, a commercial organization that has started its activity a few months ago. The aim of the project is to build a simple corporate website without any complex content – images + text only, arranged in an elegant, clean style. This website is to serve as a representation medium to the public and You need a Telegram account to get awarded. My bot will post you messages with image preview and buttons with prices. There are 4 buttons. You need to choose a price for every image. The price depends on the complexity of the image. Please respond with your time zone and your working hours. If you don't use Telegram, don't respond. 3D modelling available with high quality render I am looking for a simple bot that will log in to my website and click a button on a previously defined URL of my website. That's all it should do. The requirement is it should be able to use proxies and different email accounts. @freelancer this bot is only for my own websites, to search for dead subpages Hi, we would like to have assistance with growing our presence for a fashion brand on Instagram.
This is concerning: 1/ Sponsored Ads - List Set-Up, Lead Generation, Conversion 2/ Organic Reach - Maximise all FREE elements on Instagram to create SALES Need to be able to work with different Chinese fonts. I need the words 禧诚 aligned vertically. Crimson red words (check Harvard crest colour) and maybe some ivy plant at the side. Simple job. For a company related to education. Need it now.
How It Works At a high level, elmenv intercepts Elm commands using shim executables injected into your PATH, determines which Elm version has been specified by your application, and passes your commands along to the correct Elm installation. elmenv works by inserting a directory of shims at the front of your PATH. Through a process called rehashing, elmenv maintains shims in that directory to match every Elm command across every installed version of Elm. Shims are lightweight executables that simply pass your command along to elmenv. So with elmenv installed, when you run, say, elm, your operating system will do the following: - Search your PATH for an executable file named elm - Find the elmenv shim named elm at the beginning of your PATH - Run the shim named elm, which in turn passes the command along to elmenv Choosing the Elm Version When you execute a shim, elmenv determines which Elm version to use by reading it from the following sources, in this order: The ELMENV_VERSION environment variable, if specified. You can use the elmenv shell command to set this environment variable in your current shell session. The first .elm-version file found by searching the directory of the script you are executing and each of its parent directories until reaching the root of your filesystem. The first .elm-version file found by searching the current working directory and each of its parent directories until reaching the root of your filesystem. You can modify the .elm-version file in the current working directory with the elmenv local command. The global ~/.elmenv/version file. You can modify this file using the elmenv global command. If the global version file is not present, elmenv assumes you want to use the "system" Elm—i.e. whatever version would be run if elmenv weren't in your path. This will get you going with the latest version of elmenv and make it easy to fork and contribute any changes back upstream.
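The lookup order above can be illustrated with a small stand-alone simulation. This is not elmenv's actual code, just a sketch of the precedence it describes (the function name is my own):

```shell
# Toy re-implementation of elmenv's version lookup order, for illustration only.
resolve_elm_version() {
  # 1. The ELMENV_VERSION environment variable wins outright.
  if [ -n "$ELMENV_VERSION" ]; then
    echo "$ELMENV_VERSION"; return
  fi
  # 2./3. Walk from the current directory up to / looking for .elm-version.
  dir="$PWD"
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    if [ -f "$dir/.elm-version" ]; then
      cat "$dir/.elm-version"; return
    fi
    dir="$(dirname "$dir")"
  done
  # 4. Fall back to the global version file, else "system".
  if [ -f "$HOME/.elmenv/version" ]; then
    cat "$HOME/.elmenv/version"
  else
    echo "system"
  fi
}
```

Running it in a directory containing `.elm-version` prints that file's contents; with nothing set anywhere, it prints "system", mirroring the fallback described above.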
Check out elmenv into ~/.elmenv: $ git clone --recurse-submodules https://github.com/sonnym/elmenv.git ~/.elmenv Add ~/.elmenv/bin to your $PATH for access to the elmenv command-line utility: $ echo 'export PATH="$HOME/.elmenv/bin:$PATH"' >> ~/.bash_profile Ubuntu Desktop note: Modify your ~/.bashrc instead of ~/.bash_profile. Zsh note: Modify your ~/.zshrc file instead of ~/.bash_profile. Add elmenv init to your shell to enable shims and autocompletion: $ echo 'eval "$(elmenv init -)"' >> ~/.bash_profile Same as in the previous step, use ~/.bashrc on Ubuntu, or ~/.zshrc for Zsh. Restart your shell so that PATH changes take effect. (Opening a new terminal tab will usually do it.) Now check if elmenv was set up: $ type elmenv #=> "elmenv is a function" If you've installed elmenv manually using git, you can upgrade your installation to the cutting-edge version at any time: $ cd ~/.elmenv $ git pull $ git submodule update --init --recursive To install an Elm version for use with elmenv, run elmenv install with the exact name of the version you want to install. For example: elmenv install 0.14.1 Elm versions will be installed into a directory of the same name under ~/.elmenv/versions. It is also possible to install the master branch of Elm. The installation is directly from source, so it is necessary to have an up-to-date Haskell and Cabal setup.
E-bike speeding (>32 km/h) penalty? In Toronto, Ontario, Canada, what is the penalty for riding an electric assist bike that surpasses 32 kilometres per hour on flat terrain? Also, what is the penalty for an e-bike that surpasses the 500 watt maximum? I'm looking for sources and public records if possible. 32 km or 32 km/h - there's a big difference. @mattnz edited in a link showing that it's speed, not range. In New Zealand, which I think would be similar, you are charged with operating an unregistered and unwarranted motor vehicle. @mattnz you mean like this one in Oz at $500 a time? Or this Blenheim guy who appears to have really dug himself in. Can you register it as an electric motorbike and just ride it? Yes it will cost more, but you'll be legal, and get to fit a license plate. @Criggie even if there's a moped class you'll lose access to bike infrastructure. You're likely to have to meet requirements for lights, helmet, brakes, licence, tax,... Some may be tricky. @ChrisH yes there are a number of follow-on questions, but if OP wants to ride fast on an electrified bike, there are options, which have costs. I can average about that speed most days on a non-electric bike! @Criggie I can't average that fast, but then I ride flat bars and never get a decent run. Here in the UK the e-bike limit is only ~2/3 that, so on my wife's electric step-through with cruiser bars (and too small for me) I've had the assist cut out abruptly. There is a common misconception regarding e-bike speeds. My e-bike assists up to 25-27 km/h (there is a linear degradation within that range). However, it can still go faster - with my own bio power, that is. I once did 50+ km/h going downhill and applying a lot of my own strength. The law says: Q3: Can I modify my e-bike so it can go faster than 32 km/h? No. Modifying your e-bike to increase its speed beyond 32 km/h will no longer qualify it as an e-bike. Which means that it's no longer a bicycle; it is a moped, scooter, or motorbike.
Note that the bike can go faster; you just can't have a motor that operates while the bike is going faster. It's an interesting thing to police. The exact penalties that will be applied depend on the discretion of the law enforcement process you experience. A police officer might just ticket you for, say, "1. Drive motor vehicle, no permit", which is $85. Or they might be persuaded that you should attend court and explain your behaviour to a magistrate, in which case it could be "Dealing with vehicle not conforming to standard", which is a "no set fine" offence, and you could face quite a lot of excitement, starting with "we will give your bike only to an approved mechanic who has agreed to modify it so that it is in conformance with the standard, and who will release it to you only once it does". If you are exceptionally lucky they will agree that turning it back into an e-bike is acceptable, but they could easily say "must be a conformant motorbike", and good luck with that. If the legal system wants to make your life unpleasant, they can. The trick is not to make any of them want to do that. This "getting away with riding an illegal e-bike" advice is only partly humorous; there is good sense in it. Obey traffic laws. Do not do tricks. Do not ride like a jackass. Avoid congested roads. Do not endanger others when you ride. Viz, pick the law you want to break (fast e-bike), and obey the rest.
M: How to contribute to open source without being a programming rock star - petdance http://www.softwarequalityconnection.com/2012/03/14-ways-to-contribute-to-open-source-without-being-a-programming-genius-or-a-rock-star/ R: marijn > Projects need contributions from everyone of all skills and levels of > expertise. I disagree. I find that politely rejecting or rewriting contributions from people with little experience or clue is a serious time drain for a maintainer (as well as a trial of patience). This goes even for documentation, which, when written by people without a clear view of the software, tends to be confused and misleading. I try to handle such contributions gracefully anyway, with the idea that some of these people will use the feedback to grow into better programmers, but even that is unlikely to benefit the project -- the time span it takes to go from clueless to good is probably longer than the period in which the person is involved in my project. R: petdance Rejecting code and docs is a fine strategy for the short term. In the long term, people and enthusiasm are the scarce resources. Taking the time to cultivate them as members of the community pays off down the road. I don't agree with your assumption that "the time span it takes to go from clueless to good is probably longer than the period in which the person is involved in [the] project." If I can spend some time shepherding the new person, who is probably more naive than unqualified, it will pay off in spades later. R: toyg The problem with all this stuff is always the same: it's incredibly, utterly, mind-numbingly boring, which is why even rockstar developers (who are the ones who will get all the credit for the project anyway, and who have the famous sky-high productivity that Spolsky measures in multipliers) can't be bothered to do it. And I say that after having done some of it here and there. R: wmat Similar to 'Improve the website' is to contribute to the project wiki. 
Many open source projects use wikis as a major part of their community documentation, and these are only useful when kept updated. Most wiki engines, such as MediaWiki, even provide functions to list 'wanted pages' or similar. R: DanBC I'm pleased that they mention documentation. They don't mention translation (and internationalisation) or accessibility. {meta} the guidelines ask to avoid "14 ways to" style headlines, and to have instead "Some ways to" or even just "Ways to". R: petdance There were about 30 items I left out for the sake of article length. I'm going to be expanding all of them into a website that will have a page for each of them. R: gaius I wasn't aware that open source software came with a musical accompaniment.
using System;
using System.Threading.Tasks;
using WebSharpJs.NodeJS;
using WebSharpJs.Script;

namespace WebSharpJs.Electron
{
    public enum ClipboardType { None, Selection }

    [ScriptableType]
    public class ClipboardData
    {
        [ScriptableMember(ScriptAlias = "text")]
        public string Text { get; set; }
        [ScriptableMember(ScriptAlias = "html")]
        public string HTML { get; set; }
        [ScriptableMember(ScriptAlias = "image")]
        public object Image { get; set; }
        [ScriptableMember(ScriptAlias = "rtf")]
        public string RTF { get; set; }
        [ScriptableMember(ScriptAlias = "bookmark")]
        public string Bookmark { get; set; }
    }

    [ScriptableType]
    public class Bookmark
    {
        [ScriptableMember(ScriptAlias = "title")]
        public string Title { get; set; }
        [ScriptableMember(ScriptAlias = "url")]
        public string URL { get; set; }
    }

    public class Clipboard : NodeJsObject
    {
        protected override string ScriptProxy => @"clipboard; ";
        protected override string Requires => @"const {clipboard} = require('electron');";

        static Clipboard proxy;

        public static async Task<Clipboard> Instance()
        {
            if (proxy == null)
            {
                proxy = new Clipboard();
                await proxy.Initialize();
            }
            return proxy;
        }

        protected Clipboard() : base() { }
        protected Clipboard(object scriptObject) : base(scriptObject) { }
        public Clipboard(ScriptObjectProxy scriptObject) : base(scriptObject) { }

        public static explicit operator Clipboard(ScriptObjectProxy sop)
        {
            return new Clipboard(sop);
        }

        public async Task<string> ReadText(ClipboardType type = ClipboardType.None)
            => await Invoke<string>("readText", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> WriteText(string text, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("writeText", text, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<string> ReadHTML(ClipboardType type = ClipboardType.None)
            => await Invoke<string>("readHTML", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> WriteHTML(string markup, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("writeHTML", markup, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<string> ReadRTF(ClipboardType type = ClipboardType.None)
            => await Invoke<string>("readRTF", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> WriteRTF(string markup, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("writeRTF", markup, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<Bookmark> ReadBookmark(ClipboardType type = ClipboardType.None)
            => await Invoke<Bookmark>("readBookmark", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> WriteImage(NativeImage image, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("writeImage", image, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<NativeImage> ReadImage(ClipboardType type = ClipboardType.None)
            => await Invoke<NativeImage>("readImage", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> WriteBookmark(string title, string url, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("writeBookmark", title, url, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<string> ReadFindText(ClipboardType type = ClipboardType.None)
            => await Invoke<string>("readFindText", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> WriteFindText(string text, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("writeFindText", text, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> Clear(ClipboardType type = ClipboardType.None)
            => await Invoke<object>("clear", type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<string[]> AvailableFormats(ClipboardType type = ClipboardType.None)
        {
            var result = await Invoke<object[]>("availableFormats", type == ClipboardType.Selection ? "selection" : string.Empty);
            return (result == null ? new string[] { } : Array.ConvertAll(result, item => item.ToString()));
        }

        public async Task<bool> Has(string format, ClipboardType type = ClipboardType.None)
            => await Invoke<bool>("has", format, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<string> Read(string format, ClipboardType type = ClipboardType.None)
            => await Invoke<string>("read", format, type == ClipboardType.Selection ? "selection" : string.Empty);

        public async Task<object> Write(ClipboardData data, ClipboardType type = ClipboardType.None)
            => await Invoke<object>("write", data, type == ClipboardType.Selection ? "selection" : string.Empty);
    }
}
This is a very nice letter and I always welcome letters when someone is concerned about something. Let me reply below in bold. emotional tone - praising, accepting. deferring to you ("let me"). i'm getting rid of the rest of her comments to focus on the tone in what you said, but i just want to point out that she's kind of maintained this friendly emotional tone throughout, even though she is firm about not bending to your requests. >>> TG 12/24/10 2:34 PM >>> With all do typo :[ respect I very much disagree with your decision to give me a B in your class. I am not going to attempt to go into some sob story about what this semester was like for me reads: this was a very hard semester but I would like to highlight a couple of key points. sounds haughty... as if you're teaching. "key points" as if you have the answers to a problem. I feel that I took the time to respond respectfully to all of your questions for every assignment submitted. I did not simply fly kind of emotional - "simply fly". reads as disdain towards implied others who have simply flown. my way through this class, but took the time and effort to come up with what I believed were the most correct answers possible. Your decision to grade my post on the Communicating in Relationships (pertaining to culture) assignment as a 17/35 was surprising to me. I understand this was one of the only assignments where I made personal references to the questions asked, but I felt the examples were relevant for the task given. We were actually asked to use personal references. it sounds like you are blaming her for not following her own grading standards, and for not adhering to her word. That post was also submitted on time, so I don't see lateness as being a factor, and simply do not understand the grade. reads: "i did what i was supposed to. i answered with effort and correctly. i was not late. i don't get it." and that's fine, but it's... well, the way you state this is as if your grades are up for debate. 
The second was the Mid-Term exam which I explained to you was due to my financial hardship. I simply could not afford the book reads: it's not my problem i didn't have the book but I still showed up and thought thoroughly through all of the questions asked. From what I understand, I managed to at least average the class score, while others had the advantage of using the text. "advantage" implying that they had something you deserved but didn't have... that the grounds weren't fair... except, it was your responsibility to get the book. so it comes off as further implication that you deserve the highest grade despite not fulfilling your responsibility to get the book. The third point was "was" seems like an odd phrasing to me here. it's like you've already gone through and made this case and decided these points - you're not really aligning yourself with the prof in asking her to reconsider, but instead arguing against her. that the only assignment I did not submit was the Non-Verbal assignment. I tried to explain to you accusatory, somewhat desperate that this assignment was not solely about my own ability to complete it. It required a partner, and time taken out of business hours. First of all sounds snarky/with attitude , I don't have friends I can call favors on. reads: this assignment was a hardship and more than most normal people can handle Most of them are very busy, and I myself, work full time, am a single mother of a four year old boy, and outside of campus class time, usually work on my assignments after 10pm at night. reads: "you're expecting me to do unreasonable things for the course that i don't have time or the ability for"... essentially your professor asked unreasonable things that you just don't have time for because your life is real and hard. more and more this letter is sounding like you expect the course/prof to meet your requirements, instead of vice versa. 
While most people would have tried to bull harsh word their way through the assignment, and some told me they did, I respect my writing, assignments, and professors enough to not submit dishonest work. Had the assignment solely relied on my ability to complete it, the issue would have been different. That shows good character. Classes will always have cheaters. They lose in the long run though. "i am moral, therefore i deserve a better grade. implications that other students are not so moral and yet the prof grades them better - fault of the professor." Aside from these three points, I do feel that I attended the class with an above average level of dedication. I took time to learn the material, and produce my best work possible. Aside from my son needing me or being sick, I was in class, and volunteered for almost every class related project. This was due to genuine interest in the class content, and the desire to incorporate the material into my real world experience. i did a good job, except when it was hard to. This is the first e-mail I have ever written in regards to my grades, and honestly feel (especially due to the level of work submitted by others in class) that this grade was unjustified. i got a B+ despite not having the textbook, and not always showing up to class, but i don't think it's fair because i put in as much effort as i thought i reasonably could. I ask you to respectfully re-review my assignments, and your decisions toward the value they were graded upon. which... she did, as much as it was reasonably convenient to her to do so, much as you attended her class as much as it was reasonably convenient for you to do so.
package algogo

import (
	"testing"
)

type permutationFunc func(a []int) bool

func testPermutation(t *testing.T, permFunc permutationFunc, name string, in []int) {
	out := Copied(in)
	outs := [][]int{Copied(in)}
	nperm := 1
	for permFunc(out) {
		outs = append(outs, Copied(out))
		nperm++
	}

	// check number of permutations generated
	nperm_e := Factorial(len(in))
	if nperm != nperm_e {
		t.Errorf("%s(%v) generated %d permutations, expected %d", name, in, nperm, nperm_e)
	}

	// check whether the final state of the modified array is consistent
	if Different(out, in) {
		t.Errorf("%s(%v) generated %v on the final iteration, expected %v", name, in, out, in)
	}

	// check whether all permutations are unique
	for i := 0; i < len(outs); i++ {
		for j := 0; j < len(outs); j++ {
			if i != j && Same(outs[i], outs[j]) {
				t.Errorf("%s(%v) generated duplicate permutations: %v, %v", name, in, outs[i], outs[j])
			}
		}
	}
}

func TestNextPermutation(t *testing.T) {
	in := []int{1, 2, 3, 4}
	testPermutation(t, NextPermutation, "NextPermutation", in)
}

func TestPrevPermutation(t *testing.T) {
	in := []int{4, 3, 2, 1}
	testPermutation(t, PrevPermutation, "PrevPermutation", in)
}

func TestNextPrevPermutation(t *testing.T) {
	// NextPermutation and PrevPermutation must be inverse functions
	in := []int{4, 2, 9, 3}
	nperm := Factorial(len(in))
	for i := 0; i < nperm; i++ {
		out := Copied(in)
		NextPermutation(out)
		PrevPermutation(out)
		if Different(in, out) {
			t.Errorf("PrevPermutation(%v); NextPermutation(%v) => %v, expected %v", in, in, out, in)
		}
		NextPermutation(in)
	}
	for i := 0; i < nperm; i++ {
		out := Copied(in)
		PrevPermutation(out)
		NextPermutation(out)
		if Different(in, out) {
			t.Errorf("NextPermutation(%v); PrevPermutation(%v) => %v, expected %v", in, in, out, in)
		}
		PrevPermutation(in)
	}
}
Nanomancer Reborn – I’ve Become A Snow Girl? – Chapter 688: Maria

Keeping in mind his power and speed, she took a deep breath. “Ah, thank you very much as well.” Shiro smiled. “Can you also buy the sword and dagger?” She glanced back. Despite her delicate appearance, her sudden surge of rage gave her second thoughts about approaching casually. Entering an offensive stance, she ‘lunged’ at the Minotaur. “I apologise for that unsightly display. How can I help you?” Maria smiled sweetly as Shiro nodded her head after a short pause. “Alright, I think I’ve got it. Your fighting style is very reactive and you like to take the offensive. You avoid, or rather, redirect blows when you can. The sword you need isn’t something to help you block but something to help you parry. With your technique, a thin sword works well too, since you want attack power rather than defensive ability. Ooo…. But a thin sword might struggle to parry well for you. This is going to be a difficult sword to make. However! The potential story this sword can create in your hands is worth it.” Maria said with a serious expression which slowly turned into a grin. Dragging Shiro over to a chair, Maria started to dig through her equipment. Searching for any signs of a labyrinth, she was stunned that she didn’t even need to try that hard, since there was something shimmering in the distance. Knowing that it was a manifestation of the Minotaur trying to construct a labyrinth, Shiro flew towards it as fast as she could.
Flicking her hand, she slashed at his chest as he swung back with the club. There was only a single row of teeth in his mouth, but his tongue was a horrifying lump of flesh that looked as if it could fork out in multiple directions. If Shiro had to describe it quickly, she would call it a disgusting merge of several tongues. He didn’t have much hair, but a few tufts could be seen around his body. Without the hair covering his body, the deformities were displayed in plain view. Now that she had the power of flight, she could use the air tunnels to boost her speed further. Killing some wild birds for a meal at night, she ate her fill and slept in the foliage. Getting close to the border where the five air tunnels were, Shiro glanced at the forest that marked her entrance to the Section of Existence. “Trash!” She shouted and snapped the sword in half. However, Maria had already seen Shiro, and it was too late.
Shaking her head, she decided to rest at the border for the night since she had been flying the whole day. Tracking her position through the mental map that she had made, Shiro eventually found her former location. After she left the town gates, she unfurled her wings and began to fly towards the border. “Ohhhh. Alright, come, come, sit down!” Maria nodded her head enthusiastically, and it reminded Shiro of what Silvia was like when she brought up good healing products. Unfortunately, on the way here, she saw the town that had become a floating island. The only things living on the island were monsters, and human corpses could be seen everywhere. Some were half eaten while others were partial. There were even some corpses underneath the island, and Shiro guessed that they must have either jumped off or been thrown off. Looking at his back, one could see his spinal cord in clear detail as it led to his tail. At this point, some of the skin had been cut away, but no blood could be seen. The only thing visible was the chilling white of the bone.
With goat-like hooves for legs and long, sharp, fleshless claws, the demonic monstrosity walked towards the Minotaur with ominous intent. The beast was a weird amalgamation of various monsters, like a chimaera but not quite the same.
I am not actively taking on new students at this time, but if you are interested in particle physics, cosmology, or astrophysics, please check out what some of my fantastic colleagues are up to! Until 2021, I was a Pappalardo Fellow and NASA Einstein Fellow in the MIT Department of Physics. I received my PhD in physics from UC Berkeley in 2019 with the support of fellowships from the Hertz Foundation and the National Science Foundation. My dissertation, "Searching for the invisible: how dark forces shape our Universe", was supervised by Hitoshi Murayama and won the American Physical Society Sakurai Dissertation Award. I received my Bachelors from MIT in 2014 with a thesis jointly supervised by David Kaiser and Tracy Slatyer. In my spare time, I love making and eating all kinds of food, savoury and sweet, from a range of cuisines.

Before becoming a McGill Space Institute Fellow, I was a PhD student working with Felix Kahlhoefer at RWTH Aachen University in Germany. In September 2021, I defended my dissertation, titled “Doors to Darkness”, on the phenomenology of dark matter portal interactions. I am interested in the intersection between particle physics and cosmology in the context of modelling and probing dark matter. I want to understand how theory and experiment can be used in tandem to navigate and transform the expansive terrain of dark matter physics. When I’m not pondering the mysteries of the universe (and sometimes even when I am), I like to tell stories, fictional and otherwise, and create art of dubious quality.

Before starting my PhD studies at McGill in 2021, I did my Bachelors and Masters at the Indian Institute of Science. Broadly, my research interest is theoretical physics with a focus on astrophysics and cosmology. I am particularly interested in using theoretical models in conjunction with observational data from cosmology to study mysteries of the universe like dark matter.
I joined the group as a PhD student in 2022 and am jointly supervised by Prof. Evan McDonough, Prof. Robert Brandenberger, and Prof. Katelin Schutz. I am interested in the intersection of particle physics and cosmology theory. Lately, I have been thinking about potentially distinguishing features of ultra-light dark matter models. Before coming to McGill, I completed my Master's at Brown University. Before that, I was an undergrad at the University of Michigan. When not doing science, I enjoy dabbling in visual arts, playing video games, and watching B-movies.

Originally from the outskirts of Mumbai, I did my B.Sc. in Physics at Leipzig University and am currently in the second year of my M.Sc. in Physics at Heidelberg University. I joined the group as a graduate research trainee in 2021, working towards parts of my Masters thesis. My interest in dark matter arises from my interest in Early Universe Cosmology and Astroparticle Physics. I love teaching, both physics and math, and coming up with different visual/intuitive techniques for challenging concepts. Apart from that, I enjoy playing/watching football, cooking/eating good food, and occasional gaming!

I joined McGill as a graduate student in 2022, doing my M.Sc. in physics under the supervision of Prof. Katelin Schutz and Prof. Oscar Hernandez. I have lived in Montréal my whole life, and French is my first language. After a brief career as a hospital pharmacist, I completed a B.Sc. in physics and computer science at Université de Montréal in 2022. During my B.Sc., I did research on exactly solvable models with Prof. Luc Vinet. My broad research interests are theoretical physics and cosmology.

I am interested in astrophysics, cosmology and particle physics, particularly the search for dark matter. Before joining the group in 2022 as an MSc student, I was an undergraduate at the Chinese University of Hong Kong.
I worked on several theoretical and data analysis projects related to the indirect detection of dark matter. Outside physics, I enjoy football, hiking and water sports, as well as classical and indie music. More recently, I have grown interested in food and wine tasting.

I am a McGill physics undergrad from Kentucky, USA. I am interested in theoretical/computational astrophysics. Outside of school I like board games, yoga, and mountain biking.

I am an undergraduate student at the Indian Institute of Technology Kanpur, majoring in Physics. I joined the group for summer 2022 as a MITACS Globalink Research Intern. My primary research interests include astrophysics and cosmology and their intersection with particle physics, including but not limited to the search for dark matter, dark energy, and physics beyond the Standard Model. When not engaged in research, I like to spend my time watching and playing different sports or playing my guitar.

I am a graduate student and NSF Graduate Research Fellow at the MIT Department of Physics. I study cosmology and particle astrophysics with Tracy Slatyer’s group at the Center for Theoretical Physics, with a special interest in the nature of dark matter and its effects on the early universe. When I’m not thinking about physics, I dabble in classical music and digital painting.

Before starting my PhD at MIT, I obtained my Bachelors degree at the University of Chicago, where I explored a range of projects from collider experiment to particle phenomenology and theory. Now I am most interested in hunting for dark matter through novel astrophysical signatures and its imprints on the cosmological evolution history of the universe. I’m also interested in using Machine Learning to accelerate data analysis and simulation in my research. When I’m not doing physics, I enjoy putting on music and catching up with the local hip-hop dance scene.

Calvin Leung is an NDSEG Fellow at MIT pursuing a PhD in physics.
He enjoys thinking about unconventional probes of new physics and cosmology. His PhD thesis focuses on using CHIME/FRB Outriggers to localize the world's largest sample of fast radio bursts, in order to measure their redshifts and unlock their potential as cosmological probes. Within the CHIME/FRB collaboration he is also leading the search for gravitationally-lensed FRBs using CHIME/FRB. In his spare time he enjoys playing the cello and cooking large quantities of food.

I’m a graduate student at UC Berkeley working on heavy-ion physics in Barbara Jacak’s group. Currently, I’m working on jet substructure studies with the ALICE experiment. I completed my undergrad at MIT in 2021, and was involved in a variety of projects there studying neutrino-nucleus interactions, axion dark matter, and nuclear structure. I’m a fan of particle physics, good food, and literature in translation.

Copyright © 2021 Katelin Schutz - All Rights Reserved.
Help for reinstall and Custom UI now not working please

First, I apologise if this has been answered elsewhere, but I did searches, I browsed the help forum, and even though people had similar problems, either no one responded with a fix, or the suggested fix did not work for me. I did a reinstall of EQ2 today (from the TSO discs, not SF discs) as I was trying to change my UI from Profit to Darq, trying everything to get Darq to work, and Profit to stop, and messed everything up. So I figured a fresh reinstall of EQ2 would be the best idea. I run Win7 64bit, and so the game installed to the Program Files (x86) folder (and not to ProgramData - I checked). I also double checked the path of the application icon to make sure it was in the same location. I logged into the game without any UI installed, as I wanted to both look at the new UI (I hadn't seen the SF update - I've always used Profit), and to make sure the installation worked, and set up the performance, size, windowed mode, etc. under Options. I then logged out, deleted that character's profile, then installed Darq, following its instructions. It was still the standard UI. I logged out, went to delete the eq2.ini file but couldn't find one, I deleted the char file again, ran the Darq exe file again, making sure the path was correct as per the instructions, but still no go. I deleted the Darq folder under UI, deleted several config files under the Everquest 2 folder that had eq2 in front of them, and relaunched EQ2. Once it patched, I closed it down and didn't go in-game, but it did not patch an eq2.ini file; that was still missing. I then downloaded the Profit loader, followed all those instructions, but still no go. Rinse, repeat deleting files, and I decided to install individual UI pieces, so I created a UI folder, put them all in, created an eq2.ini file (following the instructions on this site under the FAQ section) and I double checked to make sure I did it all correctly as per the FAQ instructions.
But yet again still no go. The EQ2.ini file is there (and yup, the one I created), I even tried /loadui Custom (yup, it's what I named my Custom UI folder under the UI folder) but no changes. I am now at a complete loss. I have uninstalled and reinstalled new UIs many times prior to the UI changes recently made (to see if any newer ones are better than Profit, but so far none are IMO LOL). Any advice would be great. Thanks in advance
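For reference, the eq2.ini the UI-site FAQs describe is just a short plain-text file in the EverQuest II directory. The variable names below are the ones commonly given in community FAQs, and the folder name matches the poster's "Custom" setup, so treat this as a sketch rather than official documentation:

```ini
cl_ui_skinname Custom
cl_ui_subdir UI/
```

If the file exists with those two lines and the custom skin still does not load, the usual suspects are the file being saved as eq2.ini.txt by Notepad, or the skin folder not actually sitting under the UI subdirectory named in cl_ui_subdir.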
Java in a Nutshell
By David Flanagan; 1-56592-262-X, 628 pages. 2nd Edition, May 1997
- Table of Contents -
- Part I: Introducing Java
Part I is an introduction to Java and Java programming. If you know how to program in C or C++, these chapters teach you everything you need to know to start programming with Java. If you are already familiar with Java 1.0, you may want to just skip ahead to Part II, which introduces the new features of Java 1.1.
Chapter 1: Getting Started with Java
Chapter 2: How Java Differs from C
Chapter 3: Classes and Objects in Java
- Part II: Introducing Java 1.1
The two chapters in this part introduce the new features of Java 1.1. Chapter 4 is an overview of the new APIs, and Chapter 5 explains the new language syntax. See Part III for some examples of the new features.
Chapter 4: What's New in Java 1.1
Chapter 5: Inner Classes and Other New Language Features
- Part III: Programming with the Java 1.1 API
Part III contains examples of programming with the new features of Java 1.1. You can study and learn from the examples, and you should feel free to adapt them for use in your own programs. The examples shown in these chapters may be downloaded from the Internet. Some of the chapters in this part also contain tables and other reference material for new features in Java 1.1. Part III of this book is "deprecated." Most of the examples from the first edition of this book do not appear here, and Part III may disappear altogether in the next edition of the book. Unfortunately, as Java continues to grow, there is less and less room for programming examples in this book. However, all of the examples from the first edition are still available on the Web page listed above.
Chapter 6: Applets
Chapter 7: Events
Chapter 8: New AWT Features
Chapter 9: Object Serialization
Chapter 10: Java Beans
Chapter 11: Internationalization
Chapter 12: Reflection
- Part IV: Java Language Reference
Part IV contains reference material on the Java language and related topics. Chapter 13 contains a number of useful summary tables of Java syntax. Chapter 14 describes the standard Java system properties and how to use them. Chapter 15 covers the syntax of the HTML tags that allow you to include Java applets in Web pages. Chapter 16 documents the command-line syntax for the Java compiler, interpreter, and other tools shipped with the JDK.
Chapter 13: Java Syntax
Chapter 14: System Properties
Chapter 15: Java-Related HTML Tags
Chapter 16: JDK Tools
- Part V: API Quick Reference
Part V is the real heart of this book: quick-reference material for the Java API. Please read the following section, How to Use This Quick Reference, to learn how to get the most out of this material.
How to Use This Quick Reference
Chapter 17: The java.applet Package
Chapter 18: The java.awt Package
Chapter 19: The java.awt.datatransfer Package
Chapter 20: The java.awt.event Package
Chapter 21: The java.awt.image Package
Chapter 22: The java.awt.peer Package
Chapter 23: The java.beans Package
Chapter 24: The java.io Package
Chapter 25: The java.lang Package
Chapter 26: The java.lang.reflect Package
Chapter 27: The java.math Package
Chapter 28: The java.net Package
Chapter 29: The java.text Package
Chapter 30: The java.util Package
Chapter 31: The java.util.zip Package
Chapter 32: Class, Method, and Field Index
Warning: this directory includes long filenames which may confuse some older operating systems (notably Windows 3.1).
Search the text of Java in a Nutshell.
Copyright © 1996, 1997 O'Reilly & Associates. All Rights Reserved.
Several months ago I read a tutorial on module creation. It got me thinking about releasing some of my modules, so I got to work getting my code organized. At the time I had all of my work in the directory for my site, so I moved my general purpose modules to their own directory and then started reading more about what is needed to get a module published on CPAN. I first installed Module::Starter. It seemed like a good place to start, but then Dist::Zilla was suggested, so I installed it. Most recently Minilla was suggested, and now it is installed too. The problem is, I do not know which one to use. Do I use any of those at all, or is there yet another packaging module (with executable) out there? Module::Starter (module-starter) appears to be for those starting with well thought out plans for what they want to write. I have been told Dist::Zilla (dzil) is not for a first time releaser. Minilla appears to be for those who have a complete package written and just need to get it compressed to send to CPAN. I have also read Release::Checklist. I know that after the code, documentation, and tests are written, the rest should take maybe half an hour or less. I have lost all the confidence I had several months ago when I thought it would be easy. I am drowning in an ocean of documentation that most might find to be only a wading pool. I am reaching out and grasping at whatever scrap I can find that might match my specific circumstances, but I keep getting bogged down. I am fairly sure releasing modules to CPAN should not cause tearful emotional breakdowns on a semi-regular basis, especially when it is just one module with less than 30 lines of code to it. My circumstance is I have around 60 free-floating .pm files I am thinking about publishing to CPAN. All have plain old documentation in various states of completion. All have at least their first tests written. (Only 2 have more than the use_ok test in a test file.) The module I want to release first is Fancy::Open.
It has a separate .pod file and one test file. Where do I go from here? I am willing to do this all "by hand" if need be, since this is my very first package and release. Apologies if this is not well written. I am so confused, but I do not want to give up like I have on other aspects of Perl. I have sought wisdom on PerlMonks.

In my last post, "a meta issue for modules: bug tracking", I noticed a problem with the bug tracking link for a module and discussed that problem. In the comments, one person said he preferred rt.cpan.org. I began thinking about where to have bugs tracked for my modules; since I have not published one yet, this is something I still need to decide. I would like to know the good, the bad, and the ugly of the various systems to make a more educated choice on issue tracking before my first release. Are there specific issues with GitHub's, GitLab's, or other issue tracking systems that make rt.cpan.org the more attractive choice? On a side note, I prefer reporting issues on sites like GitHub and GitLab, since my reply email is hidden and does not get spammed, or at least not yet. However, my cpan.org email address gets a lot of spam, so much spam that I had to make a rule to send all email I receive through that address to junk mail. So, should I receive a reply to an issue I opened on rt.cpan, I may miss it, since it ends up in my junk mail, which I do not check that often. Where do you like bugs reported and why?

I was reading a module on meta::cpan when I spied a small issue. I went up to the Issues link, clicked, and was sent to rt.cpan. I know that many module authors now have their modules on sites like GitHub, GitLab, or Bitbucket. Before I posted the issue on rt.cpan, I checked the author's profile for a linked account on one of the other sites. I found the module on GitHub and read its CONTRIBUTING.md to find that the author does want issues reported there and not on rt.cpan.
I did not report my original issue; I reported the link issue instead, as it seemed more important. Today is not the first time I noticed this issue with a module's bug tracking. Before continuing, I have not released a module to CPAN and am still learning all that goes into releasing one. Please be gentle if I am wrong or stating an obvious, well known fact. I read that the bugtracker field in META.json can be set to a URL on the preferred platform. So, have CPAN authors checked their META.json files to make sure issues are being linked to the right place by meta::cpan? For those modules written before the use of other sites for bug tracking, will issues on rt.cpan get ported over to the new bug tracking location? Should there be a link to rt.cpan somewhere in the documentation for older modules whose issues have been moved away from it? Issues closed as a wontfix, for example, on rt.cpan could rear their heads on the new bug tracking platform, possibly making more work for the modules' maintainers if not noted somewhere. There could be duplicate issues in both places too. Porting issues from rt.cpan to another platform could also be problematic because of the use of different user names by issue reporters and maybe even module maintainers. For issues I have raised on rt.cpan, my user name is "ALEENA"; for issues on GitHub, my user name is "LadyAleena". However, both of those identities are connected through my meta::cpan account, so that could be used to resolve user name mismatches. To all those who have modules on meta::cpan, does it link your bug tracking to the correct location?

A little over a month ago I learned about the Perl Weekly Challenges. The site states the challenges are for any skill level. So, I went and took a look. After looking at the first challenge that week, I realized "any skill level" did not mean my skill level. My skill level is pretty basic. I can …
- open, read, and close text files and do simple manipulation of the data.
- add, subtract, multiply, and divide when it comes to math.
- tack on words or phrases to the beginnings or ends of strings.
- am okay with loops.
- write some basic regexen.
- even roll things randomly.
- do most of the above conditionally.

… that is about it. I read the challenges and my mind is totally blank on where to start after… I wish I could grasp the concepts in the Perl Weekly Challenges, especially the math. I have not taken a math class in over 30 years, and what math I remember is, as I said, pretty basic. Oh, and one needs to be more than a little familiar with Git and GitHub to contribute, which I am not.
cue/cmd: Prevent exec.Run fatal errors for non-zero exit codes

This updates the logic in exec.Run to not return an error if a command returns a non-zero exit status. As a result, the success field can be used to determine whether a command succeeded or failed. Any errors that aren't related to the command failing to run or not completing successfully are still returned.

Fixes #2632

I've been learning cue and this issue sounded interesting, so I wanted to give it a shot. Let me know if I misunderstood the intent here at all 😃

@nickfiggins apologies for the delay - we have discussed a number of options and to what degree we want to keep backwards compatibility here. Run is documented as follows, which would indicate that a command run isn't required to succeed:

success is set to true when the process terminates with a zero exit code or false otherwise. The user can explicitly specify the value to force a fatal error if the desired success code is not reached.

However, the currently implemented behavior has been in place for years, and some existing users may depend on it, so asking them to switch to a new API or set an option would be unfortunate and might catch some users by surprise. Moreover, going ahead, we think that the default should be to require a command to succeed to continue the execution of cue cmd, much like CI systems such as GitHub Actions run the Bash shell with set -o errexit, where any command failure stops the entire script. So, following @rogpeppe's thoughts above, we're thinking of adding a boolean flag:

// Run continues to result in a fatal error if success==false, and we add the "mustSucceed" field.
Run: {
    // rest of fields unchanged, including "success"

    // mustSucceed indicates whether a command must succeed, in which case
    // success==false results in a fatal error. This option is enabled by
    // default, but may be disabled to control what is done when a command
    // execution fails.
    mustSucceed: bool | *true
}

Existing users would continue to do exactly the same, for example:

cannotFail: exec.Run & {cmd: "foo"}

And then, users who want to use the "success" boolean as either true or false can set "mustSucceed" to false:

canFail: exec.Run & {cmd: "bar", mustSucceed: false}
if canFail.success {
    // logic based on the resulting status
}

Let me know if the above is clear. If you are able to update the PR this week, we will attempt to include this change in the upcoming v0.7.0 release. Thanks!

@mvdan Thanks for all the detail and explanation, just pushed the changes 🙂. Let me know if there are any issues.

Looking more closely at your latest commit:

Any errors that aren't related to the command failing to run or not completing successfully are always still returned.

I tend to disagree there; it still sounds useful to use mustSucceed: false to capture other types of exec errors, such as when a program isn't available, like cmd: "non-existent-program". It would also be rather surprising if mustSucceed: false allowed some failures to continue evaluation, but others not. I'll tweak your code so that it treats non-ExitError errors with the same logic, and appends their error strings to Run.stderr, so that they may be shown to the end user or used in some way by the CUE logic. We could later add more fields to Run to differentiate "command failed to start" versus "command started and then exited with a non-zero exit code" errors, but for now that doesn't seem particularly important. I'll add myself as co-author in the commit, for the sake of clarifying that there were non-trivial tweaks. I'll also push another commit here, so that one can diff between the two. I'm again hoping that this is OK with you, for the sake of including the fix in v0.7 :)

Done; you can look through https://github.com/cue-lang/cue/compare/5dd19902850e7d41fcd241c9d9ba539d992627f5..a454651883a739e0109c3399c7197e2908c11c58 to see my changes.
There is a bit of other noise unfortunately, since I also rebased on master. Importing to Gerrit now, for one last round of review.

I tend to disagree there; it still sounds useful to use mustSucceed: false to capture other types of exec errors, such as when a program isn't available like cmd: "non-existent-program". It would also be rather surprising if mustSucceed: false allowed some failures to continue evaluation, but others not.

That totally makes sense, I was going to ask for clarity around that but figured there wasn't much time if it were to be included in the next release 🙂.

Done; you can look through https://github.com/cue-lang/cue/compare/5dd19902850e7d41fcd241c9d9ba539d992627f5..a454651883a739e0109c3399c7197e2908c11c58 to see my changes. There is a bit of other noise unfortunately, since I also rebased on master. Importing to Gerrit now, for one last round of review.

Ah I see, this makes sense. Thank you!

I ended up running out of time to cut a release today, so it will have to be Monday. Perhaps there wasn't a need for me to rush merging this after all :) In any case, if you have any further thoughts about the design, or want to send a follow-up PR with any other changes, by all means please do!
How do I get Eclipse to show the entire javadoc for a class

Can Eclipse show the entire javadoc, i.e. all methods (and their descriptions), when I highlight an object reference? For example, if I do System, it shows me java.lang.System: "The System class contains several useful class fields and methods. It cannot be instantiated. Among the facilities provided by the System class are standard input, standard output, and error output streams; access to externally defined properties and environment variables; a means of loading files and libraries; and a utility method for quickly copying a portion of an array. Since: JDK1.0". But that tells me very little about what I can do with that. Yes, I can use the intelligent dot completion, but that seems a bit cumbersome.

If you load source into the classpath you can F3 into the class and read the code. Not the same as Javadoc but partway there... or I could do "open attached javadoc in browser" (Shift-F2), but that negates the value of having the javadoc separate from the editing environment (the browser seems to open a tab in the editing area).

To see the javadoc of a class having attached source in Eclipse: select Window -> Show View -> Javadoc (or Alt + Shift + Q, J). Then in the Javadoc view, right click -> Open Attached Javadoc (or Shift + F2); this will display the javadoc of the class in the internal browser of Eclipse. The shortcut key is Alt + Shift + Q, J on my instance. This doesn't really help me with what I want; I don't want to have to attach the source for the entire JRE.... Ah, that shortcut in i3 kills the window. Eclipse 4.31.0 shows the javadoc window, but doesn't show the content. Do you know anything about this?

In the source editor, hover the mouse over some method or class name, then press Shift-F2 (Open External Documentation). The Javadoc view will open the full documentation at the right position.
If you have source attached to the classes (both Java & custom) then you can see it in the Outline view by pressing F3. Haaa.. I guess shonky linux user added his comment before I pressed submit. Ignore mine if it's a duplicate. What you need to do is hover the mouse over some built-in method or class that you are using in your program. Hold Ctrl on your keyboard and click. Eclipse then asks you to attach source. Click on 'Attach Source' and browse for the src.zip file after choosing 'External File', or instead give the path to an extracted src folder, i.e. an external folder attachment. Next time you hover over a built-in class or method it shows a small description. To view the entire javadoc for it, keep holding Ctrl and click on it. It worked for me. I know that this post is old. Nevertheless I would like to add a procedure for showing Javadoc in an easy way. I use Eclipse Oxygen 4.7.1a, which is a 2017 build. -> Just hover over any declaration/method/object/class and press F2 to focus; at the bottom of the tooltip that appears, you can find icons/buttons that take you to its Javadoc in a browser or an Eclipse view. Shift+F2 directly opens the javadoc of the entity you are hovering over, in a browser inside Eclipse. For this method to work, a javadoc needs to be attached to your source. See linking javadoc.
STACK_EXCHANGE
As mentioned in the getting started guide, GLSP is architected to be very flexible and provides several options. On this page, we give an overview of the dedicated integration components and point to the respective source code. Due to GLSP's architecture, you can even change any of those options above later on, without impacting other parts of your implementation, or support multiple variants, e.g. VS Code and Eclipse RCP, while sharing almost all of your server and client code. There are many options to choose from. In the following, we list a few hints to help you decide. Please note that especially the tool platform integration doesn't have to be an ultimate decision. Many adopters start deploying for one tool platform, e.g. Eclipse Theia, but add support for VS Code later and offer both options in parallel. Whether to use Java or TypeScript is a matter of taste; however, there are also objective considerations. The choice of a framework to manage your source model mostly depends on two things. Besides that, there are a few more considerations. More information on the integration components is given in the section on source model integrations. The decision for a tool platform has many aspects, such as: are you providing a product or a plugin for a generic tool such as VS Code? Are your users already using a certain tool platform? However, the integration layer of GLSP editors for certain tools is rather thin, and it is not much work to provide multiple options here in parallel. So choose what's best for you now; you can easily change or add tool platform support later. More information on the integration components is given in the section on platform integrations. Depending on your choice of tool platform integration and server framework, a different selection of packages needs to be used. The project templates linked in the getting started guide provide the initial setup of the package architecture for the respective combination of components.
However, all of them will have a diagram-specific client package that depends on @glsp/client and a diagram-specific server package that depends on either the node-based GLSP server framework or the Java-based GLSP server framework. Irrespective of the used tool platform integration, server framework or source model integration, your custom glsp-client is always the same and can be reused for all scenarios. Your server implementation is also independent of the respective platform integration and reusable for multiple platforms. Depending on the source model framework, the server may add additional dependencies (e.g. to use the EMF.cloud model server client). As an example, the following figure shows the package architecture for a Theia-based GLSP editor with a node-based server. Package overview for node-based server and Theia integration The package your-glsp-client represents your custom client package and your-glsp-server depicts your custom GLSP server package. They contain the diagram-specific implementations for your diagram editor and modeling language. Please note how your-glsp-client builds upon @glsp/client, while the package your-theia-integration just integrates this as an editor, based on @glsp/theia-integration, into the Theia tool platform. Your GLSP client and your Theia integration have an indirect dependency on Eclipse Sprotty and its Theia glue code. Both the client and the server share a common package, @glsp/protocol, that defines the action types. GLSP servers can be written in any language, as they run in a separate process and communicate with the client via JSON-RPC. To make it easier to develop GLSP servers, however, GLSP provides two server frameworks: even though they are built with different runtimes and languages, they are structurally very similar. Both use dependency injection (DI) for hooking up your diagram-specific providers, services, and handlers, or for replacing default implementations with customized ones.
The Java-based GLSP server uses Google Guice as its dependency injection framework. With Google Guice, there is one main DI module that contains each binding in a dedicated method. Adopters can extend this module and customize it by overriding dedicated binding methods. The node-based GLSP server uses inversify.js as its dependency injection framework. For both servers, GLSP provides dedicated abstract base classes named DiagramModule, which are intended to be extended in order to implement a concrete diagram server. The idea of those abstract base classes is that the abstract methods they contain MUST be implemented in order to show a diagram, e.g. the source model storage and the graphical model factory, while additional methods MAY be overridden to add functionality, such as certain editing operations or model validation, or to customize default behavior. There are also pre-configured diagram modules for certain source models, described below, e.g. for EMF or EMF.cloud, which already bind relevant implementations. The remainder of this documentation shows, whenever applicable, a code example for both servers. There are also project templates for both servers, as listed in the getting started guide, as well as an example server for the common "workflow diagram" in each of the server repositories, linked above. It is worth noting that GLSP servers distinguish between two DI containers: GLSP works with any source model format or framework for managing your source model, as the implementation for loading source models and translating them into diagrams needs to be provided by the developer of the diagram editor. However, there are recurring popular choices, for which GLSP provides base modules with default implementations for a specific source model framework. GLSP-based editors can be integrated into any web application frame.
To ease the platform integration for adopters, however, dedicated glue code frameworks are provided. In general, it is recommended to keep the GLSP diagram implementation separate from the platform integration code by splitting them into separate packages. With that, the core GLSP editor can easily be reused and integrated into another platform. As an example, the GLSP Workflow example provides the GLSP diagram implementation in the @eclipse-glsp/workflow-glsp package. All platform-specific integration examples import this package and provide a small integration package containing the platform-specific glue code on top. The Eclipse Graphical Language Server Platform is a project hosted at the Eclipse Foundation, led by Philip Langer & Tobias Ortmayr, and organized within the Eclipse Cloud Development project.
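The MUST-implement/MAY-override structure of the DiagramModule base classes described above can be sketched as follows. Note that this is a hypothetical, self-contained illustration of the pattern, not the real GLSP API: the class names, the string-based "bindings", and the configure() method are all made up for the example.

```typescript
// Sketch of the DiagramModule pattern: abstract methods MUST be implemented
// to show a diagram; protected hooks MAY be overridden for extra features.
abstract class DiagramModule {
  // MUST: every concrete diagram server needs these two bindings.
  abstract bindSourceModelStorage(): string;
  abstract bindGModelFactory(): string;

  // MAY: override to add optional capabilities such as model validation.
  protected bindModelValidator(): string | undefined {
    return undefined; // no validator bound by default
  }

  // Collects all bindings, analogous to configuring a DI container.
  configure(): string[] {
    const bindings = [this.bindSourceModelStorage(), this.bindGModelFactory()];
    const validator = this.bindModelValidator();
    if (validator !== undefined) {
      bindings.push(validator);
    }
    return bindings;
  }
}

// A concrete diagram module for a hypothetical modeling language.
class MyDiagramModule extends DiagramModule {
  bindSourceModelStorage(): string {
    return 'JsonFileSourceModelStorage';
  }
  bindGModelFactory(): string {
    return 'MyGModelFactory';
  }
  protected bindModelValidator(): string {
    return 'MyModelValidator';
  }
}
```

In the real frameworks the bindings are of course Guice or inversify.js bindings to service classes rather than strings, but the shape is the same: the base module wires up the defaults, and the concrete module fills in the mandatory pieces and opts into the optional ones.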
OPCFW_CODE
AI studies the laws of intelligence, like physics studies the laws of nature. A more accurate definition of AI will bring a better understanding of its role & impact. With the many definitions of AI around — who needs an extra one? I think we do: current definitions introduce many confusions by incorrectly framing Artificial Intelligence. At the Artificial Intelligence Lab Brussels, the university research centre founded in 1983 where I work, we typically define AI as follows: Artificial Intelligence is a scientific field that (1) studies the nature and mechanisms of intelligence, (2) formalises its findings using mathematics and (3) implements them using computer science. We thus identify three main ingredients in AI: - The first pillar is philosophical — attempting to understand what intelligence is and how it manifests itself. - The second pillar is formalisation — describing intelligent behaviour with mathematical symbols: “explaining” it to a computer, you could say. The design of algorithms belongs in this category. - The third and last pillar is engineering or computer science — how to implement the algorithms and build systems that behave intelligently. Picture the three levels or pillars as baking a cake. At the philosophical level, you need to know what ingredients go well together and define a cake concept. At the formalisation phase, you describe the cake more rigorously using a recipe (an algorithm in computer science terms). Finally, making and baking it — the real test, let’s say — is the implementation part. The essential breakthroughs happen at the philosophical level. They often involve insights from many disciplines, ranging from sociology, biology and mathematics to neuroscience. AI borrows from, but also contributes significantly to, other scientific domains. AI is thus a truly interdisciplinary field. Most academic research is done at the mathematical level.
Researchers at conferences present new algorithms to solve a new kind of task or perform existing tasks more efficiently. Of course, no real system can be built without innovations at the computer science level. Object-oriented programming, for example, was developed partly in the context of AI: researchers were looking for ways to represent reality with all its relations and complexity, using abstract data types. Finally, computing hardware plays an equally important role, as it is the platform on which the algorithms run. Though chess-playing software was invented in 1950 by Shannon, it took until 1997 before computers had enough memory and computing power to beat the world champion. The importance of recognizing these three levels cannot be overstated: a good understanding at all levels is necessary to assess AI’s impact and design systems that are trustworthy and robust. For example, some things may be easy for humans but very hard for computers (take commonsense reasoning, one of the major limiting factors in current AI systems). Knowledge at the implementation level is thus needed to understand the limits of AI systems. However, this is not enough! A good understanding of the mathematics behind the algorithms that AI designers use, and of the philosophy behind them, is crucial to understand the impact once these systems are deployed in real-world scenarios. Failure to do so has already led to many unwanted side effects, including bias. This is one of the reasons why AI is a domain that is hard to grasp and difficult to implement: it requires its practitioners to possess a broad skill set, from abstract reasoning at the philosophical level, through mathematical skills, to hard coding skills. And, of course, knowledge of the domain one applies AI to and of the potential ethical & social impact! Furthermore, AI is not a fixed, single concept or technology. It is a toolbox of concepts, mental models, techniques, software and methodologies.
It is sometimes called a “general-purpose technology”. A helpful metaphor to keep in mind is that of an electric motor. It is the heart of many appliances like hairdryers, mixers, beard trimmers, drills, refrigerators or cars. It does not serve a single purpose and has no clue what function it is performing. Algorithms play a similar role in computer systems, though they manipulate data structures rather than mechanical structures. They can thus be compared to the machines on a production line, adding value to the product. The fact that the numbers they process can represent anything — gender, salary, psychological trait, pixel, email — makes them extremely powerful. AI studies intelligence in principle, which means that we are not trying to reproduce human intelligence per se: rather than trying to build a replica of a bird, we are interested in understanding flying. This means writing down the principles of flight mathematically, and then building a flying machine. We thus see that AI has two primary purposes: - creating a better understanding of “intelligence”, using computer science and mathematics as research tools; - building intelligent systems. A small note on the word “intelligence”, which bothers quite some people (including me, originally). One should not take this word too literally. To quote Edsger Dijkstra, one of the founding fathers of Computer Science: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” — Edsger Dijkstra Indeed, in the research community, with “intelligence” we typically refer to tasks that cannot yet be solved with traditional techniques or that have properties we attribute to intelligent beings. It is a moving target — once a task is “cracked” by an algorithm (e.g. route planning or playing chess), it is typically no longer called AI (though it is).
Actually, the best way to understand the “I” in AI is to think of it as the quest shared by its researchers to come to a better understanding of the world. It can best be understood as a homage to (human/animal/natural) intelligence, acknowledging the world’s complexity and mysteries. In the AI scientific domain, most interest does not go to issues like “whether robots will dominate humans”, but to more down-to-earth, yet far from mundane and much more interesting questions like: - How do humans grasp objects like a tomato? (we employ tactile feedback, have a memory of the substance, and apply a physical simulation of gravity) - What distance do we keep when talking to someone? (it depends on the context and on our relationship with the person) - “The city councilmen refused the demonstrators a permit because they feared violence.” — to whom does the word “they” refer? (such questions are called “Winograd schemas” and show that a fundamental understanding of natural language is still an enormous challenge) By framing AI as a research field, it becomes evident that our work is far from finished and that many questions remain unanswered.
OPCFW_CODE
Too many ways to set and read the branch.io keys (updated, was: Please remove hardcoding the _branchKey in BranchiOSWrapper.mm) Hi, handling the branch key(s) is not implemented in a straightforward way in the branch.io Unity SDK. I understand you wanted to make the Unity SDK as easy as possible for single Unity developers, but using the current SDK in a professional and fully automated build pipeline is very cumbersome. The first main issue is: the key can be set by multiple methods and it's read from different sources. Setting the key: Serialized via the Unity Editor into the BranchData asset Replaced via the PostBuildScript directly within the BranchiOSWrapper (see below) Manually or via PostBuildScript in the Info.plist (there it's also possible to set it in two different ways: as a dict for both test and live keys OR as a single string key only). Reading the key: Unity reads the key in Branch.cs's Awake method A better practice would be to hand over the key in the initSession method, in order to be able to define it at runtime. Important for automated build pipelines. The key could also be stored in the Branch.cs MonoBehaviour directly, which would make it possible to get rid of the BranchData.cs class completely (you don't really need it). Also, the BranchData class contains a static constructor, which is a bad practice, and Unity 5.4 will throw compiler errors if you're doing something like Resources.Load() or FindObject...() in it! iOS / ObjC reads the key from BranchiOSWrapper.mm's *_branchKey variable iOS / ObjC also reads it from the Info.plist You see, that's pretty puzzling. In my optimal world, I'd love to see branch.io integrating like this (C#/Unity pseudo code): #if DEBUG Branch.initSession(Branch.TestKey, callback); #else Branch.initSession(Branch.LiveKey, callback); #endif ...where Branch.xxKey is serialized into the according script/scene and initSession is a fictitious method name.
This also allows setting key values which come from a totally different origin (--> balancing data for games etc.) Please remove hardcoding the _branchKey in BranchiOSWrapper.mm It should not be necessary to hardcode the branch key in BranchiOSWrapper.mm like this: https://github.com/BranchMetrics/Unity-Deferred-Deep-Linking-SDK/blob/1a74f0c696a7effc0701c033654dd53e405d29b1/BranchUnityTestBed/Assets/Branch/Editor/BranchEditor.cs#L50 This adds a lot of overhead when you have an automated build pipeline - we would have to replace the key in the BranchiOSWrapper.mm every time we want to switch between live and test, et cetera. Especially because the branch keys (both test and live) are already stored in the Info.plist (according to https://dev.branch.io/getting-started/integration-testing/guide/ios/#specifying-both-test-and-live-keys-in-your-app) - you should read them from there and get rid completely of the changes made regarding setting/updating the branch key from within the Unity editor in 1a74f0c696a7effc0701c033654dd53e405d29b1. Hi derFunk, thanks for the report! We added the hardcoded key to fix cold launch trouble, when the key isn't initialised from Unity (sorry, but we can't manage the order of calls under Unity). Initialisation of the Unity Branch SDK is different from the native iOS SDK because our plugin supports Android too, so we built the Unity implementation without platform-specific things (you don't need to keep keys in the plist). It would be very helpful if you could describe your auto-building setup and how we can change our Unity SDK to support your pipeline. If you can send us a test project with auto-building, that would be great! Nevertheless, we will think about how we can remove the hardcoded key, and I think we will change this in the next version of the SDK. Hi Anton, thanks again for the fast response (wow, it's Sunday!
:) ) Short roundup about our pipeline: Our automated build pipeline basically follows the convention that we want to be able to configure every plugin setting manually and centrally. That is because we're integrating more than 5 different plugins into almost every project/game, on average. We're totally avoiding PostBuildProcesses and stripping them out as soon as we see them, to be able to keep track of what configuration and dependencies every plugin needs. We also don't necessarily want to configure plugin keys in Editor windows (but I understand this is comfortable for the average dev). Configuration in code is just fine (which does not necessarily mean that everything is hardcoded; instead it's in different datastores which get loaded at runtime). We don't want the plugins to just "do magic" without us knowing what exactly is happening (as this can lead to hard-to-resolve problems with the builds, and we need the builds to be deterministic). To be deterministic, even the Xcode projects are generated completely anew for every build (no Unity "Xcode append" etc). All plugin configuration values come from external data stores (so-called "balancing data", for example). Normally we have at least 2 different accounts for every plugin, which we want to be able to switch at build time (because we work with partners a lot, who often give us access to the plugin backends only late in the project phase, and we have to be able to test early; that's where we're using our own accounts). We do this by having a "build parameter" (e.g. a preprocessor define) which just tells the code at build time which configuration to load (our keys or the ones from our partners/publishers). That's why it's important to have a very central point to configure the plugin keys in code.
Having a plugin which loads its configuration via "static constructor magic" or within Unity's Awake methods makes our life harder, as at the same time we don't want to change the plugin's code, since that makes plugin updates a pain (checking the diffs.., documenting what we changed.. et cetera ;) ). Hope this all makes sense to you ;) derFunk, We will think about how to make our plugin better, but we need time. Sorry, but right now you need to use the "hard way" to set keys :) (Maybe light post-processing will help you?) Nevertheless, I added this issue to the dev plan, thanks! Added to dev plan.
GITHUB_ARCHIVE
Opening a new issue to better discuss maintaining Python 2 and 3 compatibility. Some problems and possible solutions: -Renamed modules: various modules were renamed in Python 3 (every module that started with an uppercase letter in the Python 2 standard library was renamed to lowercase, and some modules, like ttk, were grouped, e.g. into tkinter.ttk). This is relatively easy to fix using try...except ImportError, but maybe putting this in every module that needs it is not the cleanest way. -Deleted built-ins: this is easy, we should simply not use any of the following functions: apply, cmp, coerce, execfile, file, long, raw_input, reduce, reload, unicode, xrange, StandardError. Some of them were renamed (like raw_input was renamed to input), while others are not built-in anymore (reload is a function from the imp module) and others were simply removed. In the cases where a built-in was renamed we can do the following: try: input = raw_input except NameError: pass -Print as a function: this is easy: just don't use print "string" anymore and instead use print("string"). To guarantee that no code uses the old type of print you can include: from __future__ import print_function -Divisions: Python 2 returned an int on a division if both operands are int too (like 3/4 was 0 and not 0.75 as expected). Python 3 always returns the float, and introduces a new // operator if you want int division (this works on Python 2 too). To guarantee that no code uses the old division you can include: from __future__ import division -Absolute imports: Python 3 doesn't allow implicit relative imports anymore. Every relative import should be explicit: in this example I can't include moduleY from moduleX using an implicit relative import. It should use the following: from subpackage1 import moduleY To get the same behavior on Python 2: from __future__ import absolute_import -Unicode literals: This is the most important and trickiest one. Python 3 uses unicode literals while Python 2 uses byte strings.
So we should explicitly use unicode strings so we have the same behavior on Python 2 and 3. We can convert every literal to unicode using: from __future__ import unicode_literals The problem is, some Python 2 modules expect str instead of unicode. And some things are broken with the above statement on Python 2 (e.g. unicode docstrings). See http://python-future.org/imports.html#unicode-literals for details. The second option is to explicitly mark each string as unicode, e.g.: print(u'Hi, I am a unicode string!') But this only works on Python 3.3+, where the u prefix was reintroduced (http://docs.python.org/3/whatsnew/3.3.html). Things to discuss: -Minimum supported version for both Python 2 and Python 3. We already support only Python 2.7 because of ttk and argparse (argparse is available for older versions as a PyPI module, ttk I think not), but maybe there is some interest in supporting older versions of Python. Just remember, while it's relatively easy to support Python 3 compatibility with Python 2.6/2.7, going to lower versions is a PITA (no __future__ statements, for example). For Python 3 itself, the lowest version that we support now is probably Python 3.2 (don't take my word for it, I only tested the shim with Python 3.3), since we use the argparse module that was introduced in this version. But argparse exists on PyPI, so in theory we could support Python 3.1 too, but I don't think it's worth it. For more info about maintaining Python compatibility between major versions, see http://lucumr.pocoo.org/2013/5/21/porting-to-python-3-redux/. -Automated tests. Ideally we should have automated tests for every major Python version we support (i.e. Python 2.7/3.2/3.3/3.4). -To use or not to use __future__ statements. Excluding the unicode problem, every other __future__ import is only there to guarantee that no one writing or testing code on Python 2 uses incompatible versions of the code, since this functionality exists on Python 2.
The recommended approach would be to put the following import in every file: from __future__ import absolute_import, division, print_function And maybe unicode_literals too, if we do use it. -Another option would be to use the six (https://pythonhosted.org/six/) module that takes care of the majority of the problems, but if we only need to support relatively new versions of Python 2 and 3 we shouldn't need it (six is recommended if you do need to support ancient versions of Python like 2.5 while maintaining compatibility with Python 3). python-future is another module that does the same, but apparently has better documentation (http://python-future.org/quickstart.html). Since the code is new, we could simply add the following to every module: from __future__ import (absolute_import, division, print_function, unicode_literals) from future.builtins import * And start writing Python 3 code from now on. -Yet another option would be to drop Python 2 or Python 3 (please not) support so we don't need to deal with these problems. Some nice links: -Even python-future doesn't support Python <=3.2, so we should stick with Python >=3.3. -For the Python 2.x series we should support only Python 2.7, which is 4 years old. -To make things simple, we shouldn't use six (it does too many things), but maybe python-future would be nice. It's a very nice library btw. -If we need to drop a major version, I would prefer to drop Python 2. It's simply too old and all Python development is happening on Python 3 (for example, new modules in the standard library and language features). Thanks for the writeup. The only reason I'd be hesitant to drop Python 2 completely would be because Macs come loaded with Python 2.7 by default. However, I think dropping Python 2 support would be the best option. It seems like the programming overhead associated with what should eventually be (and potentially already is) a completely obsolete version of Python far outweighs the benefits of maintaining support for people using 2.7.
For us it's potentially lots of headbanging and messier-looking code, and for users it's the time saved installing Python 3 (which is extremely easy). I'm all for simplicity and I really don't see the point in sacrificing that in order to make installation one command shorter. I was a little hesitant because Macs come loaded with Python 2.7 by default, and being stuck in my little college bubble I assumed 2.7 would be the safer option to go with. Unless there's some super compelling reason to keep Python 2 support, I'd be completely fine with just scrapping Python 2 support if everyone's fine with it. https://github.com/kaaedit/kaa is Python 3 only and its codebase is awesome. If you need some advice on Python 2.7 compatibility I can be of assistance. But I think sticking with Python 3 is sensible; many Linux distros have it. No modern computer system should have a problem with a system package of Python 2 and 3 side by side. Regarding the Mac issue, https://github.com/Homebrew/homebrew/wiki/Homebrew-and-Python. Out of the box AFAIK Python 3 may not exist, but Homebrew makes a cakewalk of it. I just had both set up last week, no problem. @m45t3r: I think the most important thing you mention is http://lucumr.pocoo.org/2013/5/21/porting-to-python-3-redux/. Ronacher's _compat libs do the trick well. I just helped out at monetizeio/sqlalchemy-orm-tree#20 and implemented a compat module derived from werkzeug/jinja2/flask. If you want to support only Python 3, I completely support this choice. Python 3 code is cleaner, more consistent and faster (or so some people say). The Python 3 standard library is gaining more features and some things are better too. If we do choose to support Python 2 and 3, I think using python-future would be the easiest choice, using something like the imports mentioned above. But the user would need to install python-future, and if the user needs to install something, maybe it's simply easier to say "use Python 3". If we decide to remove Python 2 support, I can do it.
It's easy to do, just remove some imports. Let's drop python 2 then. Go ahead and start ripping out the imports
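For reference, the renamed-module and renamed-builtin shims discussed earlier in the thread boil down to something like this (a minimal sketch using only standard-library modules; queue/Queue stands in for any of the renamed modules):

```python
from __future__ import absolute_import, division, print_function

# Renamed module: queue (Python 3) vs. Queue (Python 2).
try:
    import queue            # Python 3 name
except ImportError:
    import Queue as queue   # Python 2 name

# Renamed built-in: input (Python 3) vs. raw_input (Python 2).
try:
    input = raw_input       # on Python 2, make input behave like Python 3's
except NameError:
    pass                    # on Python 3, input already has this behavior
```

After these shims run, the rest of the module can use the Python 3 names (queue, input) unconditionally on either interpreter.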
OPCFW_CODE
If you’re taking your website or web app security seriously, having strong and reliable processes and workflows is essential. Here’s an extensive yet concise website security checklist that covers the most critical aspects, from identity and access management to CDNs and firewalls. I hope you find it helpful and that it helps you create effective and well-structured security processes and procedures. Software and dependencies: - Keep all your IT systems patched and up to date (servers, CMS, frameworks, libraries, modules/plugins, 3rd party tools, services, etc.). - If you have any extensions, modules and/or plugins, make sure that: - all updates and patches are installed; - the extensions/modules/plugins are actively used by the community; - they’re coming from a trusted source; - they’re actively maintained by their creators/maintainers. Connections and authentication: - Use encrypted connections for everything: - Install an SSL certificate; - Switch the website to HTTPS; - Use encrypted connections for file transfer and for server/database access; - Do not forget to automatically redirect all HTTP traffic to HTTPS. - Make sure all the passwords (file transfer, CMS, database, etc.) are unique and strong. - Update passwords regularly (consider using a password manager and a strong password generator). - Enable MFA or key-based authentication where possible. Access management and permissions: - Introduce strong IAM procedures. - Audit your settings and permissions: - User settings/permissions; - Comment settings/permissions; - General visibility of information (e.g. PHP or CMS error reporting that can reveal configuration details). - Follow the least-privilege rule and be careful about your permissions. - Audit file and directory permissions (read, write, execute for owner, group, or public). - Prevent directory browsing. - Protect sensitive files. User data and web interfaces: - Ensure secure online checkouts. Use AVS and CVV and follow standards such as PCI DSS.
- Validate and sanitize user-entered data (reduce XSS and SQL injection vulnerabilities). - Make sure you perform validation not only on the client side but on the server side too. - Enable HTTP Strict Transport Security (HSTS) to disallow unencrypted traffic. - Enable a Content Security Policy (CSP) to protect against XSS. - Prevent image hotlinking. - Set up extended logging and store the logs in a secure place separate from the main application. - Set up automatic regular backups. Make sure you have: - offsite backup storage; - another copy of the backup in a separate location (you’re going to need this copy in case your main backup becomes corrupted or unavailable). - Test restoring the website/database from the backup to make sure the backups are usable, and make sure you can do it with minimal downtime. - Audit for misconfigurations in your applications; review web server configuration files. - Move sensitive configuration files (and other files containing passwords) to a secure directory outside of the public web folder to make them inaccessible to general web access; add them to .gitignore. - For PHP-based applications: edit your php.ini file to use more secure settings (e.g. set ‘register_globals’ and ‘display_errors’ to Off). Intrusion detection and intrusion prevention: - Use a Web Application Firewall (WAF). - Monitor traffic surges and set up an alert system. - Use a DDoS mitigation service. - Use a CDN + load balancing for additional high-traffic resilience and DDoS protection. - Invest in a malware detector/scanner + security extensions for your CMS. Processes and workflows: - Automate as many processes as possible: updates, logging, firewalls, monitors, scanners, etc. - Every few months: - Perform a manual audit and clean-up (install missing updates, remove unused plugins/modules/extensions, remove/block unused user accounts, update permissions and settings, change passwords, etc.).
  - Review and test all automated systems that are in place (review configurations, make sure backups and logs are being created, etc.).
- Check known vulnerability reports and stay up to date with standards and documentation. A good place to start is the OWASP checklists, testing guides, and the OWASP Top 10 security risks listed at https://owasp.org/www-project-top-ten/

I hope you find this website security checklist helpful. Feel free to reach out if there are any questions, comments, or great resources you’d like to share!

Other articles about cybersecurity: OWASP testing guide
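To make the HSTS and CSP items above concrete, here is a minimal sketch that audits a set of HTTP response headers. The checking function and its header list are my own illustration (X-Content-Type-Options and X-Frame-Options are two further common hardening headers not mentioned in the checklist itself):

```python
# Checklist items expressed as required response headers.
# The mapping from header name to rationale is illustrative, not exhaustive.
REQUIRED_HEADERS = {
    "Strict-Transport-Security": "enforce HTTPS (HSTS)",
    "Content-Security-Policy": "mitigate XSS (CSP)",
    "X-Content-Type-Options": "prevent MIME sniffing",
    "X-Frame-Options": "prevent clickjacking",
}

def audit_headers(headers):
    """Return the rationales for checklist headers missing from a response."""
    present = {name.lower() for name in headers}
    return [why for name, why in REQUIRED_HEADERS.items()
            if name.lower() not in present]
```

You could feed this the headers returned by any HTTP client during a periodic audit; anything in the returned list is an action item.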
Webpage expired in IE8

Some of our clients are experiencing a "Web Page Expired" error in IE8 on our site. I've tried it myself in IE8 but am having no problems, and have heard it might be something to do with having "Do not save encrypted pages to disk" checked under Internet options. I enabled this on my browser and still could not replicate the problem. The page in question has a form that gets auto-submitted by JavaScript and then processed with PHP. It then redirects to another page after processing. Could something here be the root of the problem? Thanks!

<script language="JavaScript" type="text/JavaScript">
function doSubmit() {
  this.document.autoForm.submit();
}
//-->
</script>
</head>
<body onLoad="window.setTimeout('doSubmit()', 1000)">
<form name="autoForm" method="POST" action="process.php">
  <input type="hidden" name="handlerID" value="1">
</form>

Side note: Don't pass strings to setTimeout; it uses eval (which is evil)! Pass functions: setTimeout(doSubmit, 1000).

The "Web Page Expired" message usually occurs when you are trying to go back (via the browser's "Back" button) to a page to which you just POSTed data. Basically, imagine you were trying to create a new user. Your New User form takes the UserName and Password and POSTs them to the Create a User page. This page then sends you to the Welcome to the Site page. If you were to hit Back, your browser would try to send you back to the Create a User page. But your browser recognizes that you just POSTed data there. "Web Page Expired" is IE's way of asking a user whether they would like to re-submit their form (and create a second user), or whether they would rather reload the page without any POST variables.

How do you fix this? If the data that you are sending is small (it looks like it is just a number in this case), then a possible workaround would be to pass this value to process.php with GET rather than POST.
GET requests traditionally do not create or delete anything, so your browser is OK with sending you there a second time.

I'm not sure I understand exactly, as no one is hitting the back button? It's just happening automatically, and the form that is being submitted by the JS is being submitted to a completely different page, process.php.

Can you describe the workflow a bit more? As I understand it, someone comes to Page1.php. The page automatically submits a form (via JavaScript) to process.php. Finally, process.php sends them to Page3.php. Is this right? Also, is there a way to figure out which page is showing the "Web Page Expired" message? Do you have access to anything that records which pages users visit?

The person starts on Page1.php; this posts information to the page with the code above, which we'll call Page2.php. This has a form which auto-submits to process.php. process.php handles the request and redirects the user to Page4.php. I don't have any access to the system for the user experiencing the problem, but it's a work computer using IE8, and all other work machines at that workplace are experiencing the same thing. I asked them to send me a screengrab, and from that I can see that it is getting stuck on Page2.php, which is the page with the auto-submitted form above.

I've heard of issues similar to this (where an entire office of people is having a bizarre web problem) before: the problem is usually related to a proxy server caching all of the pages. Everyone at the office might be accessing the web through a server at the office. If the server is caching things inappropriately, then it might cause the effects that you are describing. After that, I'm out of ideas :(
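The POST-to-GET workaround suggested above can be sketched language-neutrally; here it is in Python rather than PHP, purely for illustration. The hidden handlerID field of the auto-submitted form moves into the query string, which the server then reads back:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def build_get_url(action, params):
    """Build the GET equivalent of the auto-submitted POST form."""
    return f"{action}?{urlencode(params)}"

# The hidden handlerID field from the form above becomes a query parameter:
url = build_get_url("process.php", {"handlerID": 1})

# process.php can then read it back from the query string instead of the POST body:
params = parse_qs(urlsplit(url).query)
```

Since the browser no longer associates the navigation with a POST submission, the "Web Page Expired" re-submission prompt does not apply.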
According to GitHub's report, Python is the most popular programming language across the globe and is widely used for web development. There are several reasons to choose Python for web app development; they are listed below. Read on to find out more!

1. The language is available free of cost
Developers can download it from the official website, and along with it comes its SDK (Software Development Kit). Python has a powerful toolkit that is easy to handle, and the applications developed with it have high usability. Its API (Application Programming Interface) is quite dynamic, and the resulting code is efficient and straightforward. This is one of the foremost benefits of using Python.

2. The language offers maximum flexibility
As of 2022, there are three major operating systems: Windows, macOS, and Linux. Python is compatible with all three and shows excellent results every time after code development. It also works well with other platforms such as IBM, Solaris, and VMS. Since these systems are used worldwide, Python has gained global recognition, and its usage has been on the rise ever since.

3. Integration processes are much smoother with Python
If a Python developer uses the language to build an application's backend, any other language can be used to create the interface, and compatibility won't be an issue. Any other framework can be used, and the application won't look any different than if it were coded in a single language.

4. The versatility of the language is relatively high
The needs of the consumer are ever-changing. This can bring about considerable changes in the programming sector, since application code, once developed, is tedious to rectify. When Python is used for web development, one doesn't need to change the essential components of the code; instead, one can integrate data to expand the application.
Python’s open-source framework ecosystem helps speed up this process to a great extent.

5. AI learning and Python
The community support for Python is top-notch. One can always find a dedicated team of developers working relentlessly to fix critical issues. This, together with the language's strong tech stack, makes it a natural choice for AI (artificial intelligence) development programs. Machine learning is a fast-developing field, and developers can use Python's syntax without spending too much money and time learning the language in depth.

6. Prototypes developed using Python are less time-consuming and more sustainable
A robust prototype is what markets the product most effectively. Python development companies ensure that the first view of the final product convinces the client of its efficiency. Since prototypes are developed faster, developers don't have to put in many hours to get one function right. This reduces the company's costs, and the financials stabilize over time. Client satisfaction is heightened, which is one of the main reasons to choose Python for web development.

7. Easy testing
Python's readability is one of its strengths, and developers often recommend the language on this basis alone. Readable code is easier to review and test, which helps teams complete projects on time and meet tight deadlines efficiently.

8. Documentation is extensive
The Python community is vast, so input on different concepts is easy to find. Junior coders learn from newly added code snippets, and seniors in the field can use them to fuel creativity in their projects. This cohesive developer environment drives progress in software and web development.

9. Multi-tasking using Python
This is how Python can benefit web development on all possible fronts. Python has several use cases:
- Data science: Data scientists make particular use of this feature.
They can manipulate and extract data easily using different Python libraries, such as Pandas. Big and small data-driven organizations frequently use these libraries to build expansive data banks.
- AI applications: Businesses have been adopting artificial intelligence, with a recorded rise of 270 percent over the four years from 2017. Advanced applications such as facial-recognition software are developed and used rapidly across many industries.
- Software development: Python efficiently completes complex mathematical tasks, which the financial sector favors. Varied software is developed that makes different tasks easy for various sectors.
- Web applications: Python frameworks such as Django and Flask help developers build applications quickly. Read up on their features to choose the one that best suits your purpose.
- Game development: Tree-based algorithms can be designed effectively, and there is an automatic feature for repetitive actions. Complicated game levels can be easily scripted in Python, and textures can be enhanced even further.
- Data security: Python helps preserve information secrecy, as it provides excellent website security. Big companies and businesses prefer this programming language and hire developers with matching backgrounds. Security breaches are rare, which is one of the significant benefits of using Python.

The statistics of the programming language also prove its rising popularity. In 2021, a university study showed that 4 out of 5 aspiring web developers prefer Python as their primary programming language. It is currently the third most used programming language globally, and job seekers with knowledge of it are in demand.

The reasons above are why Python is a good fit for a web development project. Suppose you are working on a similar project now or will be working on one in the near future. In that case, the development community recommends considering Python as the dominant programming language to achieve a smoother interface, and businesses should hire a top Python development company to develop extensive web applications.
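To make the web-application point concrete, here is a minimal sketch of a Python web app. It uses only the standard library's WSGI interface (frameworks such as Django and Flask build on the same request/response idea); the route and greeting text are invented for illustration:

```python
def app(environ, start_response):
    """A minimal WSGI application: one callable handles every request."""
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    path = environ.get("PATH_INFO", "/")
    return [f"Hello from {path}".encode("utf-8")]

# To serve it locally, the standard library is enough:
# from wsgiref.simple_server import make_server
# make_server("127.0.0.1", 8000, app).serve_forever()
```

Because the application is just a callable, it can be exercised directly in tests without starting a server, which is part of what makes Python web code easy to test.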
Transmission Control Protocol / Internet Protocol essay sample

TCP/IP is a communication protocol suite that was developed to support internetworking across different networks. Currently, about 2.4 billion people use the Internet, yet probably only a small percentage understand how the Internet sends information or where the technology to send the data originated.

Topics: Transmission Control Protocol, Internet Protocol, User Datagram Protocol. Pages: 7 (1,371 words). Published: November 16, 2014.

The Transmission Control Protocol for the Internet was created under the American Department of Defense's Advanced Research Projects Agency network (ARPANET) (Leung & Li, 2006).

TCP/IP stands for Transmission Control Protocol / Internet Protocol and is a network protocol suite used on the Internet, LANs, and WANs. TCP/IP is a layered protocol; in the five-layer view it comprises the physical, data link, network, transport, and application layers.

TCP/IP grew out of work in the late 1960s and 1970s on connecting large mainframe computers together for the simple purpose of sharing data and information. At present, most computer operating system manufacturers incorporate the TCP/IP suite into their software.

Protocol numbers are used to configure firewalls, routers, and proxy servers. In Internet Protocol version 4 (IPv4, Request for Comments [RFC] 791), the protocol number can be found in the Protocol field of an IP header. ICMP echoes are used mostly for troubleshooting.

TCP/IP is a set of rules that define how packets of information must reach the end user; if necessary, the packets of information are resent.
List and describe the common protocols in the Transmission Control Protocol / Internet Protocol (TCP/IP) suite.

TCP (Transmission Control Protocol) is one of the main protocols in TCP/IP networks. Whereas the IP protocol deals only with packets, TCP enables two hosts to establish a connection and exchange streams of data.

The Transmission Control Protocol is one of the most reliable, connection-oriented communication protocols used in Internet traffic. The main aim of this section is to conduct research on TCP-friendly protocols and find suitable answers to questions such as which features of TCP are not suitable for them.

Transmission Control Protocol / Internet Protocol operates within the scope of the OSI model and divides it into four subsections known as the TCP/IP layers. OSI layers (7) application, (6) presentation, and (5) session are subsumed within the application layer of TCP/IP.

The OSI model divides network processes into seven different layers, allowing for effective communication. Layer 7, the application layer, provides network services to application processes, interacts with and supports the networking needs of various software, and provides protocols.

The functions of Transmission Control Protocol / Internet Protocol (440 words, Jan 31st, 2018, 2 pages): TCP/IP is a layered protocol; it has five layers, which are the physical, data link, network, transport, and application layers.

The session layer is the fifth among the seven layers of the Open Systems Interconnection (OSI) model. It resides above the transport layer and below the presentation layer, and provides value-added services on top of the underlying transport layer services.
The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the transport layer of the Internet model.

The foundation of the Internet is based on the Transmission Control Protocol / Internet Protocol (TCP/IP) networking suite, which serves to arbitrate control of the many connections that comprise the web, with design criteria specifically intended to avoid packet collisions and ensure the highest performance possible.

Proposal to implement Internet Protocol version four (IPv4): the purpose of this paper is to present a proposal to implement Internet Protocol version four (IPv4), also known as the Transmission Control Protocol / Internet Protocol (TCP/IP) structure, as our primary means of communication within our network infrastructure.

Transmission Control Protocol / Internet Protocol (TCP/IP) is the language a computer uses to access the Internet. It consists of a suite of protocols designed to establish a network of networks and to provide a host with access to the Internet.

3.0 Transmission Control Protocol / Internet Protocol. 3.1 The structure of TCP/IP: TCP/IP protocols are based on a layered framework like the seven-layer OSI reference model.

Transmission Control Protocol (TCP) and Internet Protocol (IP), technically speaking, are two very distinct network protocols. However, since they are so commonly used together, the term TCP/IP has been standardized to refer to both or either protocol.

Most current Internet applications depend on the Transmission Control Protocol (TCP) to deliver data reliably over the network. Although it was not part of its initial design, the most essential element of TCP is congestion control; this defines TCP's performance characteristics.
The Transmission Control Protocol of the Internet is an end-to-end, connection-oriented transport-layer protocol which provides reliable packet delivery over an unreliable network. In spite of the efficient performance of TCP, it is difficult to study and analyze its different versions.

The Internet, from its inception, functions using Transmission Control Protocol / Internet Protocol (TCP/IP), the basic communication language it understands. TCP/IP, while distributing packets to and from different devices on the Internet, also assigns addresses to the devices for easy identification.

Essay instructions: consider Internet Protocol version 6 (IPv6) technology, one of the technologies listed on Gartner's 2004 hype cycle, which has high visibility today because new IPv4 addresses are nearly exhausted.

Transmission Control Protocol over Internet Protocol became the de facto standard when its protocols were incorporated into 4.2BSD Unix. TCP/IP was developed by DARPA for internetworking and encompasses both network-layer and transport-layer protocols.
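As a concrete illustration of the host-to-host, connection-oriented service described above, here is a small self-contained sketch using Python's standard socket module: a TCP server and client on the loopback interface establish a connection and exchange a byte stream. The echo behavior is my own example, not part of any protocol specification:

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one TCP connection and echo back whatever it receives."""
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Server side: bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=echo_once, args=(server,))
t.start()

# Client side: the TCP three-way handshake happens inside create_connection.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, tcp")
    reply = client.recv(1024)

t.join()
server.close()
```

Note that the application sees a reliable byte stream; segmentation, acknowledgments, retransmission, and congestion control all happen below this API.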
https://awionline.org/compassion-index#/230 Once more. US Fish and Wildlife Service Plan Dooms Remaining Wild Red Wolves to Extinction https://act.alaskawild.org/sign/stopcubkilling/?t=3&akid=2046%2E141490%2E_CBziP Stop the killing of wolf pups and bear cubs on national preserves! https://www.facebook.com/groups/667522496706361/permalink/1097167237075216/ Take time to write public comments for Wildlife! Personalized public comments are an important tool for slowing the rollback of regulations or can be used in lawsuits to fight this https://www.facebook.com/angela.willmes Retweets HRes 401 https://www.hagamoseco.org/petitions/ayudemos-a-los-abandonados Argentina. San Nicolás. Diminish the number of stray dogs by sterilizing, providing adequate veterinary services and a public refuge, with professionals who collaborate without profit for the refuge. 3 more petitions will follow https://action.peta2.com/page/6199/action/1 Clemson: Your Balloons Could Kill Animals https://www.peta.de/obi If not signed yet. Stop animal cruelty at OBI, get animals out of the hardware stores https://trekdegrens.nl/ ABN AMRO, ING en Rabobank. Save the rainforest. Stop deforestation with our money! Scroll down please. * First, last name, email http://org.salsalabs.com/o/676/p/dia/action4/common/public/?action_KEY=23170&okay=true US-info. Protect our public lands from the #1 toxic polluter! Oppose the National Strategic and Critical Minerals Production Act in HR 5515 https://bit.ly/2t2suP8 Russia. We require Russia's acceptance (signing) of the European Convention for the Protection of lab animals used for experiments or other 'scientific' purposes! https://www.change.org/p/info-savoirdonner-com-pour-que-nala-ne-soit-par-rendue-a-ses-tortionnaires France. Do Not return Nala to his torturers! https://www.change.org/p/urgence-a-saint-maixent-l-ecole-79400-pour-les-chats-des-rues If not signed yet. France. St Maixent l’École. 
The city has become a real hell and a permanent danger for stray cats! https://www.change.org/p/luis-martinez-hervas-protecci%C3%B3n-animal-en-el-cpa-de-parla-ya Spain. Improve the conditions for the dogs in the Parla dog pound now! https://www.change.org/p/ilustre-ayuntamiento-de-puente-genil-c%C3%B3rdoba-creaci%C3%B3n-de-un-grupo-de-trabajo-de-bienestar-animal-en-el-ilustre-ayunt-de-puente-genil Spain. Puente Genil (Córdoba). We want to create a working group within the City Council to support the acts that lead to Animal Welfare and Protection in our town https://www.change.org/p/ayuntamiento-de-puente-genil-proteccion-animal-en-puente-genil-con-protectoras-y-no-con-perreras As well https://www.change.org/p/noelia-posse-los-animales-de-m%C3%B3stoles-mas-abandonados-que-nunca Spain. The Animals of Móstoles, more abandoned than ever! No to the massification of the Animal Collection Center; accept a protocol so that several associations can cooperate! https://www.change.org/p/c%C3%A2mara-municipal-de-jarinu-prefeitura-municipal-de-jarinu-castra-m%C3%B3vel-para-jarinu Requesting a Mobile Castration Unit for Jarinu SP, Brasil https://bit.ly/2zyl1Nf Russia. Vladivostok. Take action against such policemen! They just shot the dog next to his sleeping owner https://www.change.org/p/probosque-evitemos-la-tala-de-50-mil-%C3%A1rboles-en-las-pe%C3%B1as-jilotepec-estado-de-m%C3%A9xico Avoid logging 50 thousand trees in the Peñas Jilotepec, State of Mexico! https://www.change.org/p/gobierno-nacional-de-colombia-ministerio-de-medio-ambiente-ecopetrol-ecopetrol-gobierno-paren-ya-el-derrame-en-barranca-y-reparen-ambientalmente-la-zona Colombia. Stop the oil-spill caused by Ecopetrol at the Barrancabermeja and clean up ! https://www.thepetitionsite.com/takeaction/848/870/142/ Justice for Katy P. — Find the Monster Who Lit a Firecracker in Cat's Rectum ========== News and more ========= http://m.china.com.cn/node/20180119/index.htm China. First click on the Upper Box at your RIGHT! 
Vote for Nr 1 and now Nr 2! (Nr 1: NPC deputies suggested that legislation strictly define and punish the abuse of animals; now circa 1,241,196 votes. Nr 2: NPC deputy Zhu Lieyu: the abuse of animals has become increasingly fierce; punishment suggested; circa 1,229,204 votes now. Every 24 hours!)
See below:
http://m.china.com.cn/plug/20170228/index.htm CHANGED! PLEASE VOTE FOR NR 2, second page, NOW! China. Takes a while. Drag it up while clicking on the red hand or grey lining, to the second page! Vote for Nr 2, now circa 2,427,229. Zhu Lieyu: maltreatment of animals recommended for penalties under public security administration. Every 24 hours.
https://mygivingcircle.org/search/hunter+valley+brumby+association/ Vote please, once a week
https://www.express.co.uk/news/nature/970009/Fur-free-Britain-MPs-vote-PETA-animal-rights-Government Britain will not ban fur, say MPs
https://www.facebook.com/groups/634849086706350/permalink/856253241232599/
https://www.bbc.com/news/world-us-canada-44775113 Trump pardons Oregon ranchers who sparked 2016 militia standoff
Serverless Company — Lambda Business

As an enterprise strategy and business development consultant, I like to stress my limits from time to time by entering unknown fields. My current playground is Amazon's AWS cloud services offering, which I am desperately trying to embrace. A great experience and a fruitless exercise combined, since the AWS services universe expands faster than I can deep-dive into all the fascinating technology details of this setup. What I find remarkable, though, is how Jeff Bezos' leadership principles and customer-oriented service designs shape the AWS technology. A portfolio of this size, managed in such a consistent and well-knit manner, is rarely found among the enterprise suites we all know. This brings me to the idea of reengineering businesses from that technology perspective. I can only recommend the AWS Well-Architected Framework white paper, which explains the five constitutional pillars of the AWS cloud technology: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. Mix them together with Amazon's 14 leadership principles, add some lean and agile method glue, and voilà: here is the digital business framework we all hope to reach with our own legacy-burdened companies from the past.

Is the framework applicable to existing companies? You bet. All at once? For sure not. We haven't forgotten Hammer's and Champy's reengineering revolution from the early '90s, which brought good insights about lean processes but got stuck in the end, since Goliaths are hard to kill. The beauty of service-oriented concepts is that you can cut first and optimize later. That principle, discovered in the CORBA age, has now come to a technical ("cloud") and procedural ("DevOps") maturity such that "Everything as a Service" has become possible with super-lightning speed and unprecedented quality, if architected in a cloud-compatible way. To stay in the picture: most companies try to be EC2 businesses when converting to digital.
"Go cloud but stay as you were" seems to be the theme. Digitized product offerings, omni-channel marketing, and web-based service tickets more often than not hide a legacy organizational kernel that remains unchanged and creates stress between centrifugal forces at the perimeter and the immovable core.

A serverless solution

I think we can do more than that. What if we as a business provided services that are "serverless", in the sense that you just connect, consume the service, pay as you go, and leave the rest to the provider? "Easy," you might say, "aren't all the retail marketplaces and app stores part of that business?" True, and we can see the success of the big platforms, led by Amazon in the consumer space. But I think it's time to really create Everything as a Service: IT solutions, enterprise consulting, education, complaint management, and all the operations functions that make it so hard to build a consistent customer experience these days. ISR, the company I work for, has just spun off the Buildsimple brand, which has exactly that goal: build products that allow your business to digitize its operations quickly. We are at quite an early stage, but the first services can already be obtained as needed via the AWS Marketplace as SaaS offerings. Stay tuned for more!
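The "connect, consume, pay as you go" model above is what an AWS Lambda-style function looks like in code. The sketch below is a generic, hypothetical handler (the event shape follows the common API Gateway proxy convention; the greeting logic is invented), not any actual Buildsimple service:

```python
import json

def handler(event, context=None):
    """Sketch of a Lambda-style handler: stateless, invoked per request.

    The provider owns the servers; the business logic is just this function.
    """
    # Hypothetical input: an API Gateway proxy event with query parameters.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The design point is that there is no process to manage between invocations: scaling, patching, and availability are the provider's problem, which is exactly the operational posture the post argues businesses should offer their own customers.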
The classes of methods have no private specification document. The specifications are contained in the specification of the module or the interface that contains the method. The information contained in the public part of a method suffices to generate a skeleton implementation of the method.

The public part of a method contains its signature. The signature consists of:
· A unique method name
· The type of the method result
· An ordered set of parameters. Each parameter has
  o A name that is unique within the realm of the method
  o A type
  o A placeholder for the value of the parameter

In addition, private methods feature the following assets:
· A DisclosedAlias
· A Boolean asset HideFromPublication

The DisclosedAlias is a name that does not disclose the nature of the method. It is used in specifications of local client methods and in method disclosures. The default value for HideFromPublication is true. If the value of HideFromPublication is true, then the specification of the private method is not included in the specification of the containing element.

The public part may also include:
· An optional short description
· Optionally, the name of the method result
· A Boolean stating that this method is a class-wide method
· A Boolean stating that the body of the routine must be included in a critical section
· A Boolean indicating whether a result section must be inserted in the last part of the implementation of the skeleton of the method
· A Boolean indicating that this (internal) method must be used inline inside other methods that make use of this method
· A Boolean indicating that no name mangling must be applied to its name
The default values of these Booleans are false.

The hidden part consists of source code; after compilation it consists of the binary implementation of the method. This hidden part possesses a Boolean asset HideFromPublication. The default value of this Boolean is true.
However, if the value of HideFromPublication is false, then the hidden part is published. The hidden source code part consists of the code lines that must be inserted inside the skeleton of the method. These code lines may be read by the design tool and reinserted after the skeletons are recreated. In order to support reinsertion, the skeleton must contain comment lines that act as placeholders. In this way, the design tool can maintain inserted code while the architecture is changed.

The code lines consist of specifications of local assets, operations, block statements, and navigation statements. The operations represent normal method calls, methods that are implemented by the programming language, or methods that are introduced programmatically via operator overloading. Block statements define lower-level encapsulation regions. Unless indicated explicitly otherwise, the assets specified within a block are not accessible outside that block.

The hidden binary part is always part of a larger piece of binary code. This can be a function library, a class library, a load library that contains a package of modules, or an executable.

The configuration tool may support several methods to preserve inserted code. First, the inserted code may be specified with the public part of the specification of the method. Second, the tool may retrieve the inserted code from the existing code files; during regeneration of the skeleton, the code is reinserted. Third, the tool may store the retrieved code and reinsert it on request when the skeleton is generated. These actions are language specific. Thus, C++ source code is not inserted in a skeleton that is meant for C source code.
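The signature structure and the placeholder-comment skeleton described above can be sketched as data structures. This is my own illustrative model (names, the C-style skeleton syntax, and the BEGIN/END markers are assumptions), not the specification's actual tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Parameter:
    name: str             # unique within the realm of the method
    type: str
    value: object = None  # placeholder for the parameter's value

@dataclass
class Method:
    name: str                           # unique method name
    result_type: str                    # type of the method result
    parameters: List[Parameter] = field(default_factory=list)
    disclosed_alias: str = ""           # private methods only
    hide_from_publication: bool = True  # default per the spec

    def skeleton(self) -> str:
        """Generate a skeleton with placeholder comments for code reinsertion."""
        params = ", ".join(f"{p.type} {p.name}" for p in self.parameters)
        return (f"{self.result_type} {self.name}({params}) {{\n"
                f"  // BEGIN inserted code\n"
                f"  // END inserted code\n"
                f"}}")
```

A regeneration tool would parse the BEGIN/END comment pair, stash the lines between them, emit a fresh skeleton, and reinsert the stashed lines, which is the mechanism the text describes.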
"Incompatibility" issue when trying to run a script

Hi @mpetuska! I'm giving ktx a try by running:

> ktx install https://raw.githubusercontent.com/krzema12/PersonalConfigs/master/scripts/removeLocalMergedBranches.main.kts --alias=remove-merged-branches

It's a script I've been using for a year now. The installation completes with success: nothing on stdout, 0 exit code. Checking the ~/.ktx dir:

> ll ~/.ktx/bin
total 0
lrwxr-xr-x 1 piotr staff 60B Jan 19 09:13 remove-merged-branches@ -> /Users/piotr/.ktx/scripts/removeLocalMergedBranches.main.kts

> head /Users/piotr/.ktx/scripts/removeLocalMergedBranches.main.kts
#!/usr/bin/env -S ktx execute
@file:DependsOn("org.eclipse.jgit:org.eclipse.jgit:<IP_ADDRESS>612231935-r")

import org.eclipse.jgit.api.Git
import org.eclipse.jgit.api.ListBranchCommand
import java.io.File
import kotlin.system.exitProcess

fun red(text: String) = "\u001B[31m$text\u001B[0m"

Everything looks as expected. I'd expect that calling remove-merged-branches runs the script, but it doesn't work:

-bash: remove-merged-branches: command not found

Let's try to call the symlinked script:

> ~/.ktx/bin/remove-merged-branches
Removing /Users/piotr/.ktx/bin/remove-merged-branches due to incompatibility with<EMAIL_ADDRESS>Please reinstall it.
Usage: ktx execute [OPTIONS] TARGET [ARGS]...
Error: Invalid value for "TARGET": File "/Users/piotr/.ktx/bin/remove-merged-branches" does not exist.

Also, the symlink disappeared:

> ll ~/.ktx/bin
total 0

I'm on a MacBook Pro M2, Ventura 13.1, bash 3.2.57(1)-release.

Can you try uninstalling ktx, installing the latest version, and rebooting your device? 0.1.0 had some issues with migrations, I'm afraid...

Unfortunately, it keeps happening. Is there a way I can provide some logs to help you investigate?

No logging just yet, I'm afraid, but I can reproduce it fairly easily. As a temp fix you could try creating a ~/.ktx/.version file and putting 0.1.1 in there. That should prevent those false-positive migrations.
Something's still off:

> ll ~/.ktx/bin/
total 0
> ktx install https://raw.githubusercontent.com/krzema12/PersonalConfigs/master/scripts/removeLocalMergedBranches.main.kts --alias=remove-merged-branches
> echo "0.1.1" > ~/.ktx/.version
> ~/.ktx/bin/remove-merged-branches
Removing /Users/piotr/.ktx/bin/remove-merged-branches due to incompatibility with<EMAIL_ADDRESS>Please reinstall it.
Usage: ktx execute [OPTIONS] TARGET [ARGS]...
Error: Invalid value for "TARGET": File "/Users/piotr/.ktx/bin/remove-merged-branches" does not exist.

By the way, are you installing via script or SDKMAN?

SDKMAN.

0.1.2 is out for you to try.

Works! :tada: Thanks!
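The `.version`-file workaround suggested in the thread amounts to a version guard before running migrations. Here is a hypothetical sketch of that guard in Python (ktx itself is Kotlin; the function name and semantics are my assumption about what the fix in 0.1.2 does):

```python
import tempfile
from pathlib import Path

def needs_migration(ktx_home: Path, current_version: str) -> bool:
    """Only run migrations when the recorded version differs from the running one."""
    version_file = ktx_home / ".version"
    if not version_file.exists():
        return True  # no marker yet: assume an old layout and migrate
    return version_file.read_text().strip() != current_version

# Demo against a throwaway directory standing in for ~/.ktx:
home = Path(tempfile.mkdtemp())
first = needs_migration(home, "0.1.1")    # no .version file yet
(home / ".version").write_text("0.1.1\n")
second = needs_migration(home, "0.1.1")   # versions match: skip migration
```

The bug report is consistent with the guard firing spuriously (a "false-positive migration") and deleting the symlink; writing the marker file by hand short-circuits it.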
GITHUB_ARCHIVE
How to Internationalize iOS Apps – A Quick Guide

I find myself routinely surprised by how little information is available on topics that would seem to me quite obvious. Today I decided to try to internationalize an app I've built – create some locale-specific date support and foreign language sets. I figured this would be pretty simple… d'oh. A quick peek at developer.apple.com yields a few docs, however many of them are still at Xcode 3.0 and it's quite obvious that the menus and commands are all different now. After quite a bit of poking around I finally found something useful. Many of the docs linked from this one are out of date, but this is the best source of info on developer.apple.com. You can Google around and find links to tutorials, however most of them are also way out of date and almost none of them bother to say what version of iOS and Xcode are being used. So for clarity I will say: I'm working with iOS 6.1 and Xcode 4.6. Less important: I'm running on OS X 10.8.2 as well.

Where to Start?

I've got quite a bit of experience with this stuff from web programming. If you do not have a background in this area, then the best place to start is the above-linked library article. It provides a decent background on all of this. GNU gettext is another great place to look for general information on i18n and l10n in software systems. As you read through the Apple docs you'll quickly see that they are written for Xcode 3.0 and all the menus are different now. I skimmed over all of this, but then I found it easier to "go to the source". Grab the International Mountains sample app and open that up in Xcode. Check out the directory structure of the project. This app has been set up for Chinese, English, and French – you can see the separate directory structures (ending in .lproj) and their contents. Seems pretty obvious.
On a per-language basis there are copies of XIB files, strings files, and then also plist files, because there are strings used in things like the icon name for your app. Note that iOS apparently cannot deal with dialects of a language like French vs. Canadian French – OS X can, so I'm not sure what happens with that Chinese sample… maybe this has changed since the Apple docs were written. It is a bit concerning that the xib files themselves are placed in here. Any changes to the app's screens will have to be replicated across all the files, I suppose. We'll address some of that later in the helpers and tools section of this post.

Open up Xcode and take a look at this project. Open up the resources group and notice that things are nicely organized for you. At least that is good! Click on a xib to open it in the IB tool and then click on the Identity tab to see the configuration. Notice that this app was built back in the stone age for iOS 3. I updated all of these to match my config and confirmed I can still run the app in my simulator. You have to change each one individually, which is a pain. You should also update all the other spots in build settings likewise.

Adding International Aspects to My App

So now that we have a model app, albeit a very ancient one, we can take a look at how to do this on one of my own projects. I have a project that uses both xib files and where I have hand-coded a lot of modal screens and panels. I'll need to use both IB and some command-line tools to fully extract all my strings. Open up a xib file and click the Identity tab in the assistant. About 2/3 down you'll see a "Localize" button. Use that and Xcode will do some "stuff". If you look in the file system you'll see that Xcode has moved the file to the en.lproj folder and created a plist file. Unfortunately you have to manually keep your project tidy in Xcode itself. If you want to make another language, it does not seem like there is a way to do this from IB.
What I did was copy the en.lproj folder, then went back to my project, selected the xib file, and added the new xib file to the project. After doing that, Xcode restructured the project view and the IB location menu to indicate that there is now an English and a French localization. I did the same thing with the InfoPlist.strings files. Not exactly intuitive… maybe there is a better way that I'm missing.

Next you will need to create the Localizable.strings files and add them to your project. To add the first one, select your Resources group and then do File -> New -> Resource -> Strings File. You should add this one to your en.lproj folder and make sure you name it Localizable.strings. To add the next one I had to do it manually because Xcode would crash. Go to Finder and copy and paste the file you just created into the fr.lproj folder. Then add this to the Resources folder manually – voila. My project structure now looks like this.

So great – I have separate copies of all my xib files. Now the chore would be to go through them updating the text with the translation. Sounds painful, and it is. The approach does have one benefit – any slight changes for spacing and sizing can be accommodated. There are also scattered strategies for how to automate this and that part of it. The most obvious one to me would be to use placeholder strings in the XIB but replace them all in the viewDidLoad() method; however, this would mean I need to set up connections and properties for all the labels. Lots of work…

Once that is set, the next step is to go through all my .m files and replace all the hard-coded strings with a call to NSLocalizedString(@"StringKey", @"StringKey comment");. That also sounds like a lot of work! Once you have gone through the app and done this, there is a tool, genstrings, that you can run against your .m files to extract the keys and comments to your .strings file.
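The extraction tool Apple ships with Xcode is genstrings. A minimal invocation looks like this (the en.lproj location matches the layout described above; it assumes your .m files sit in the current directory):

```shell
# Scan the .m files for NSLocalizedString() calls and write the
# extracted keys and comments into en.lproj/Localizable.strings.
genstrings -o en.lproj *.m
```

Note that genstrings overwrites the output file by default, so run it against your base language folder and copy the result into fr.lproj (and any other .lproj folders) for translation.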
Then you can complete your strings files with the proper values using a format like this:

/* Title used for the Navigation Controller for the detail view */
"detailViewNavTitle" = "Mountain Detail";

This is straight from the Apple example. If you look in the RootViewController.m file you will see that they are in fact doing what I suggested – using the viewDidLoad() method to replace strings in the XIB file. Well, that is a lot of work. I'm going to now actually do this in one of my projects and see how all this theory works out. I'll post back an update here with my results and any changes in approach that I come up with.
OPCFW_CODE
Allow users to suggest edits to the duplicate list

Sometimes I find a question which is closed as a duplicate, but I know a better duplicate for that question. When I find a question which isn't closed yet and which I know is a duplicate, I can vote or flag to close it as a duplicate, possibly suggesting a different target so both show up. But when the question is already closed as a duplicate, there isn't really any satisfactory way to suggest another target. Here are the options I have, none of which are really satisfactory:

Flag for moderator attention so that a moderator can edit the duplicate list
This is an option, but I'm not sure a custom flag would be appropriate for something that minor. In any case, it would be better if the community could handle it, like they do for questions that aren't closed as duplicates yet.

If a user with a gold tag badge or a moderator closed the question as a duplicate, ping them in a comment to suggest the other duplicate
This could work if a user with a gold tag badge or a moderator closed the question as a duplicate, but that's not always the case, so this option isn't always available. Per the FAQ, only those who cast binding close votes can be pinged in comments.

Ping a gold-badge holder in chat
While this doesn't require a moderator to intervene, not many gold-badge users are active in the site's chat rooms, those who are may not be active in chat at the time the request is posted (causing the request to go unnoticed), and some users prefer not to receive pings regarding duplicate closures.

Link to the duplicate in a comment
The problem with this is that comments aren't as visible as the duplicate banner, so future users with the same problem aren't really likely to find the target I suggested which has a better answer to their question. This is especially true if the question already has a lot of comments.
Answer the duplicate target with a similar answer to the answer to the duplicate question
This could be a good option in some cases, but in other cases the answer to the duplicate question isn't an answer to the duplicate target.

Flag/vote to close the current duplicate target as a duplicate of the question I found
Again, this could work in some cases, but just because there is a third question which is a duplicate of both questions doesn't mean the two questions are necessarily duplicates of each other. In other words, A being a duplicate of B and C doesn't necessarily mean that B is a duplicate of C.

Moderators and gold tag badge holders don't have this problem, since they can edit the duplicate list, but there is currently no satisfactory way for normal users to suggest another duplicate of a question which is already closed as a duplicate. I suggest making it possible for users (either any user or users with a certain amount of reputation) to suggest edits to the duplicate list. Just like users who don't have the privilege to edit a post on their own can suggest an edit to that post, users who don't have the privilege to edit the duplicate list on their own should be able to suggest edits to the duplicate list.

I suggest that those suggestions be treated as suggested edits and added to the suggested edits queue. Two votes to approve the suggestion would result in it being approved, and two votes to reject would result in it being rejected. Moderators and gold tag badge holders would have binding votes, and they would also have the ability to improve or reject an edit. Reviewing such suggestions could be limited to certain users, in the same way that suggested edits to tag wikis can only be reviewed by 5k users.
I suggest requiring 3k reputation to review those suggestions, because such users have the privilege to vote to close questions as duplicates, but the required reputation could be increased if necessary. If you have any other suggestions on who should be able to review suggested edits to the duplicate list and how the suggestions should be presented to reviewers, feel free to post an answer.

Well, I do support this, but to be honest I can't see this happening. The "edit duplicates list" feature was added as some kind of bonus already; it's hard to believe they'll spend even more time on this. Pinging a gold hammer with a suggestion is good enough for the rare cases where it's needed.

It's easiest to simply use a comment to suggest a similar or duplicate question; the proposed question will appear in the right column under the "Linked" list.
STACK_EXCHANGE
As most of you are aware, earlier this year we officially released Honeybee[+]. In that post we mentioned that a comprehensive tutorial for matrix-based daylighting simulation with Radiance would be released by LBNL in the near future. We're excited to finally share the good news that the report is officially released and can be downloaded from the Radiance-online tutorials page. See the PDF file for "Daylighting Simulations with Radiance using Matrix-based Methods" and the examples for the tutorial under the Advanced Tutorials section.

At present, Honeybee[+] links advanced daylighting simulation techniques like the Daylight Coefficient Method, Three-Phase Method and Five-Phase Method to parametric interfaces such as Grasshopper and Dynamo. These advanced methods are implemented in Radiance, a thoroughly validated, command-line-based lighting-simulation tool that has been under continuous development at the Lawrence Berkeley National Laboratory since the 1980s. Radiance also forms the basis for daylighting simulations in Honeybee (legacy version) and most of the annual daylighting simulation software currently used in the industry.

While its accuracy and features are widely acknowledged both in industry and academia, gaining an in-depth understanding of Radiance has been, and will always be, a challenge. Aside from being a command-line-based tool that was originally meant for Unix operating systems, Radiance isn't just a single program. It is actually a collection of over 100 programs, where each command has several inputs. For instance, here is one of the "simplest" workflows for an annual daylighting simulation in Radiance:

// Consolidate geometry for efficient ray tracing.
oconv materials.rad room.rad objects/Glazing.rad > octrees/roomDC.oct

// Perform raytracing.
rfluxmtx -I+ -y 100 -lw 0.0001 -ab 5 -ad 10000 -n 16 - skyDomes/skyglow.rad -i octrees/roomDC.oct < points.txt > matrices/dc/illum.mtx

// Generate a series of climate-based skies from weather data.
epw2wea assets/NYC.epw assets/NYC.wea
gendaymtx -m 1 assets/NYC.wea > skyVectors/NYC.smx

// Generate results through a series of matrix multiplications and scalar operations.
dctimestep matrices/dc/illum.mtx skyVectors/NYC.smx | rmtxop -fa -t -c 47.4 119.9 11.6 - > results/dcDDS/dc/annualR.ill

In Honeybee and Honeybee[+] we shield some of this complexity from the user through a combination of Python and Dynamo/Grasshopper plugins. Nevertheless, a better understanding of the underlying logic and syntax of Radiance will enable users to perform such simulations with better efficiency and accuracy. The recently released daylighting tutorial by LBNL, titled "Daylighting Simulations with Radiance using Matrix-based Methods", is meant to provide that understanding. This tutorial, which covers nearly three decades of scientific research and software development on daylighting simulations, is a resource for understanding both the rationale and the methodology of all the matrix-based simulation methods that are currently possible with Radiance. More specifically, this document covers both grid- and image-based simulations using the 2-Phase, 3-Phase, 4-Phase, 5-Phase and 6-Phase methods. Officially, this tutorial supersedes all the prior annual daylighting tutorials issued by LBNL (https://www.radiance-online.org/learning/tutorials). For all those members of the Ladybug Tools community who are interested in gaining an in-depth understanding of the daylighting simulation methods implemented in Honeybee and Honeybee[+], we recommend this tutorial as the comprehensive resource to do so.
The official announcement for this tutorial by LBNL can be found here: (https://radiance-online.org/pipermail/radiance-general/2017-October/012281.html) The download link for the tutorial and its exercise files can be found on the official Radiance website at: (https://www.radiance-online.org/learning/tutorials/). See under the Advanced Tutorials section. P.S. The author of the tutorial asked me not to mention his name in this post, and I didn't, but you know who he is! We owe him a big debt of gratitude for this very well-done tutorial and all his work for Honeybee[+].
OPCFW_CODE
Continued from page 1

This arrangement of directories and sub-directories will provide good file organization for the example website. Understanding my reasoning for this directory structure should help you design a directory structure for the website you have in mind.

Default Page Configuration

Every website has at least one default webpage configured (also called the "home" page). The default webpage is the webpage that is returned when a user enters, or clicks on, a link containing only the domain name, without a specific file name. On a Unix or Linux web server, the default webpage will usually be "index.htm". On a Windows web server (IIS), the default page will usually be "default.asp". The website administrator, or you if your webhost provides the required "control panel" feature, can actually configure any page to be the default page. If your web server has more than one default page configured, I would recommend removing all but the default page that you intend to use.

Now, let's assume that all of your webpages need to link to an image file named "logo.gif" stored in the "common" folder. The relative link on your default webpage would be "common/logo.gif". The website file manager interprets this as "look in the folder named common for the file named logo.gif". However, the link on any webpage contained in one of the sub-directories would be "../common/logo.gif". The website file manager interprets this as "go up one level, then look down in the folder named common for the file named logo.gif".

This difference in links may not be a problem unless you use SSI or ASP (Active Server Pages) to build your webpages from a common header file and a common footer file. Then you need a different link in the common file depending upon whether the page linked to the common file is the default webpage (where you would use common/filename) or a webpage contained in a sub-directory (where you would use ../common/filename). There are several ways to solve this problem.

1.
If your website has a server-side scripting engine like ASP or PHP and you know how to program, you could implement code that selects the proper link.

2. You could use the complete path, including the domain name, on all pages. This will cause problems if you ever have to move your website to a different web host (until all DNS servers across the planet have been updated).

3. You could put your home page in a sub-directory, for example "common", and make your default page into a redirect to your home page. Then you would use "../common/filename" for all links. The following meta tag, placed in the head section of your default webpage, will immediately redirect the user's browser to your real home page.

<meta http-equiv="refresh" content="0; url=http://yourdomain.com/common/homepage.htm">

In this article, I showed you how to design a directory structure for your website and how to create relative links to link all your webpages together to form a website. Website visitors don't like to do a lot of scrolling, so try to keep your webpages to only two or three screens high. Please, no more websites that consist of only one mile-long webpage!

----------------------------------------------------------
Resource Box: Copyright(C) Bucaro TecHelp. To learn how to maintain your computer and use it more effectively to design a Web site and make money on the Web visit bucarotechelp.com To subscribe to Bucaro TecHelp Newsletter visit http://bucarotechelp.com/search/000800.asp
----------------------------------------------------------
OPCFW_CODE
ERROR 2006 (HY000) at line MySQL server has gone away

Problem

I encountered this error during a MySQL DB dump and restore. None of the solutions posted anywhere solved my problem, so I thought I'd post the answer I found on my own for posterity.

Source env: CentOS 4 i386, ext3, MySQL 5.5. Most table engines are MyISAM, with a few InnoDB.
Destination env: CentOS 6 x86_64, XFS, MySQL 5.6.

The source DB is 25 GB on disk, and a gzipped dump is 4.5 GB.

Dump

The dump command from source -> destination was run like so:

mysqldump $DB_NAME | gzip -c | sudo ssh $USER@$IP_ADDRESS 'cat > /$PATH/$DB_NAME-`date +%d-%b-%Y`.gz'

This makes the dump, gzips it on the fly, and writes it over SSH to the destination. You don't have to do it this way, but it is convenient.

Import

On the new destination DB I ran the import like so:

gunzip < /$PATH/$DB_NAME.gz | mysql -u root $DB_NAME

Note that you have to issue CREATE DATABASE DB_NAME; to make the new empty destination DB before starting the import. Every time I tried this I got this type of error:

ERROR 2006 (HY000) at line MySQL server has gone away

Source DB conf

My source DB is a virtual server using VMware, so I can resize the RAM/CPU as needed. For this project I temporarily scaled up to 8 CPUs/16 GB of RAM, and then scaled back down after the import. This is a luxury I had that you may not. With so much RAM I was able to tune the heck out of the /etc/my.cnf file. Everyone else had suggested increasing

max-allowed-packet
bulk_insert_buffer_size

to double or triple the default values. This didn't fix it for me. Then I tried increasing timeouts after reading more online:

interactive_timeout
wait_timeout
net_read_timeout
net_write_timeout
connect_timeout

I did this and it still didn't work. So then I went crazy and set everything unreasonably high.
Here is what I ended up with:

key_buffer_size=512M
table_cache=2G
sort_buffer_size=512M
max-allowed-packet=2G
bulk_insert_buffer_size=2G
innodb_flush_log_at_trx_commit=0
net_buffer_length=1000000
innodb_buffer_pool_size=3G
innodb_file_per_table
interactive_timeout=600
wait_timeout=600
net_read_timeout=300
net_write_timeout=300
connect_timeout=300

Still no luck. I felt deflated. Then I noticed that the import kept failing at the same spot, so I reviewed the SQL. I noticed nothing strange. Nothing in the log files either.

Solution

There's something about the DB structure that's causing the import to fail. I suspect it's size-related, but who knows. To fix it I started splitting the dumps up into smaller chunks. The source DB has about 75 tables, so I made 3 dumps with approximately 25 tables each. You just have to pass the table names to the dump command. For example:

mysqldump $DB_NAME $TABLE1 $TABLE2 ... $TABLE25 | gzip -c | sudo ssh $USER@$IP_ADDRESS 'cat > /$PATH/$DB_NAME-TABLES1-25`date +%d-%b-%Y`.gz'

Then I simply imported each chunk independently on the destination. Finally, no errors. Hopefully this is useful to someone else.

The answer to this question was to split the dump into chunks by tables, then do multiple imports. See the details in the original post.
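If you have many tables, writing those chunked dump commands by hand gets tedious. Here is a sketch of how they can be generated programmatically; the database and table names are placeholders, and the ssh/destination parts of the pipeline from the post are omitted for brevity:

```python
# Sketch: split a database's tables into chunks of 25 and build one
# mysqldump pipeline per chunk. Names here are hypothetical; the real
# table list could come from `mysql -N -e 'SHOW TABLES' $DB_NAME`.
def chunked_dump_commands(db, tables, chunk_size=25):
    commands = []
    for i in range(0, len(tables), chunk_size):
        chunk = " ".join(tables[i:i + chunk_size])
        n = i // chunk_size + 1  # 1-based chunk number for the filename
        commands.append(f"mysqldump {db} {chunk} | gzip -c > {db}-chunk{n}.gz")
    return commands

# 75 placeholder tables, matching the source DB described above.
tables = [f"table_{n}" for n in range(1, 76)]
for cmd in chunked_dump_commands("mydb", tables):
    print(cmd)
```

Each generated command can then be run on the source host, and each resulting file imported independently on the destination, as described above.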
STACK_EXCHANGE
So Dropbox have unleashed Dropbox Pro, offering a 1TB plan for AUD$10.99/month. I was a paying customer with Dropbox a couple of times in the past, but I always dropped it because it simply didn't offer enough storage space for what I was paying, and I wasn't keen on the idea of paying more than $10/month for the level of cloud storage that I needed. Storage should be cheap. Now, because of this 1TB plan, I figured I might try to simplify my backup strategy even more and try to get some synchronisation/automation happening with it, so I forked over some cash and have come up with the following.

I've been lugging around external drives as my main hub of data for a while now (here's a post about it), but one of the big issues I've always had is the fact that I suck at keeping the information in sync consistently. Even putting in a calendar entry to remind me doesn't work. If my routine is disrupted, then it's quickly forgotten. So now I've decided to replace my 1TB external drives with a 1TB Dropbox account.

When I upgraded my Dropbox account and started to load it with data, I realised that I hadn't considered that the Dropbox folder is in my home folder by default. I quickly filled up the SSD that my OS runs on. Whoops. So to get the space I needed, I chose to move the Dropbox folder onto a 1TB external drive for my main computer (as I don't have 1TB of free space on any internal drives), which I'll leave plugged into the computer at all times. I've encrypted the drive as well, so it's no concern if somebody takes it. If the drive is disconnected, then Dropbox will kill itself until you put the drive back.

The diagram below pretty much sums up my new strategy. Each machine I use will have Dropbox installed on it, and they'll all push their changes up to the cloud, solving my sync problem. The little green house icons are CrashPlan, which I use as a secondary backup service for data that can't go into Dropbox (like my home folders, or gigantic files not often accessed).
I may ditch CrashPlan, though, if I can't find any value in it in the future. I can use Selective Sync with Dropbox to sync only the necessary data to my machines that don't have as much space, like my Mac or my work PC.

Now, I think you'd be insane to put everything into Dropbox. The employees of the company can access the information within if they wish, or they can hand it over to authorities if requested. And the service is a US-based company, so even being in Australia my data is still under US jurisdiction (even if the data is sitting on an Australian server). Until they adopt some form of zero-knowledge capability, they cannot be trusted completely. It's not a tinfoil-hat kind of fear I have; it's just a fear of the imperfections of humans. A simple oversight is all it takes to expose your data, and I don't want my financial information or emails being leaked like that.

So to get my private information into my Dropbox and still reap the benefits of the cloud, I installed Boxcryptor, which is a sweet little tool that encrypts files in your Dropbox folder before they're sent up to the cloud. It's like adding a zero-knowledge layer on top of the service. Boxcryptor basically acts like EFS on Windows (the installer actually recommends disabling EFS so there's no confusion), and to access your data you need to go through a virtual drive that it mounts on your computer. I've tried to run it on my Mac to see what the experience is like on there, but it won't currently run on the Yosemite beta. If you sign out of the application, then the virtual drive is simply disconnected.

Here are the configuration changes I made to Boxcryptor to help strengthen it:
- Don't remember password. I think it's safest to require intervention at boot for the decryption of your files, in case somebody gets hold of your PC and you don't have disk-level encryption enabled, or they can bypass authentication to the machine.
- Enable Filename Encryption. Optional, but I think it's safer to do this.
A lot can be inferred from the names of files. This does require the paid version, though.
- Disable Start with Windows. You should only use the application when you need it; otherwise the convenience will expose you to more risk.

I have evaluated most of the cloud storage providers, and the one thing that I have come to care most about is stability. I want a service provider that I know will be there in 5 years' time.
- SpiderOak is a few years old, but I still find some latency in the product's ability to keep in sync across machines, meaning I can't trust that a file I put on one machine will make it across to the others in a timely manner.
- Mega was promising with its 50GB of free storage and supposed zero-knowledge policy, but it's really hard to trust that a venture by Kim Dotcom won't have the plug pulled on it.
- OneDrive, Google Drive and iCloud all have very pricey options for 1TB of storage, and they're each tied to one platform.

I guess I like Dropbox for being a standalone player in the market. They essentially pioneered the consumer cloud storage industry and they've still maintained their platform independence. As I mentioned before, they can't be trusted with anything really important as they're not zero-knowledge, but they can be trusted to keep the service running and to always be neutral on platform support. I think if they were taken over by Microsoft or Apple or another one of the companies that like vendor lock-in for their customers, then I'd probably start looking for an alternative again.
OPCFW_CODE
Touted as the platform of the largest data science community, Kaggle offers a unique peek into the data science industry. The survey by Kaggle covers significant areas of data science, such as:
· Programming languages
· Machine learning algorithms
· Diversity, salary, and education

The latest survey, from 2020, included nearly 24,000 users from across the globe, providing information about their opinions, behavior, and demographics. Every year Kaggle conducts a survey to explore the topics and trends that matter most to specific groups. Since we're already at the beginning of another year, we will specifically discuss the surveys from previous years: 2017, 2018, 2019, and 2020. You would be amazed to see the outcomes. With new techniques and algorithms accelerating at a breakneck pace, the surveys show whether new techniques will continue to replace old ones or instead become part of existing practice.

· Ever since the surveys started taking place, the number of respondents has remained high, staying between 17,000 and 24,000 every year.
· Of those, roughly 2,400 to 4,100 respondents listed "data scientist" as their job title.

Besides data scientists, other job titles also appeared among the respondents, such as "data analyst" and "business analyst"; however, both titles have been amalgamated into one category. For various reasons, job titles such as machine learning engineer appeared only in the 2017 and 2020 surveys, so you won't see them in the other years (2018 and 2019).

The surveys help users analyze trends and technologies in the data science and big data analytics industry. More so, such surveys can help aspiring data science professionals, machine learning engineers, and big data analysts better understand those trends. Let us delve deeper and briefly talk about the significant areas covered in these surveys.
Most often, an aspiring analyst is unsure which programming language to learn to get into data science. Going by the survey, most data scientists preferred Python: more than 78 percent of data scientists, machine learning engineers, and software engineers reported they were comfortable with it. Even among business and data analysts, the usage of Python consistently grew, from 61 percent to 87 percent. The other language most preferred by data scientists was R. However, the percentage of data scientists using R dropped by more than 33 percentage points, from 64 percent to 23 percent. Overall, Python consistently remained the most preferred programming language among data scientists, machine learning engineers, business analysts, and data analysts over the past four years.

Data science techniques involving data analysis and predictions are the heart of data science. Over these four years, questions were asked about the types of general techniques used and the timeline of data science workflows. One specific question appeared in three of the four surveys: "What are the types of machine learning algorithms used?"

Tech professionals looking to make their way into the data science industry need to know the types of machine learning algorithms used in data science techniques. The most common algorithms include:
· Linear Regression
· Logistic Regression
· Decision Trees
· Gradient Boosting
· Neural Networks

Another category of machine learning algorithms, missing from the image, covers unsupervised machine learning: clustering and dimensionality reduction.

As for diversity, most surveyed data science professionals were male, although there has been significant improvement over the past years in the percentage of non-male professionals in the data science realm.
However, the proportion of men in the data science field was still high (over 80 percent). Over the years, salary compensation for data scientists also increased, except for job titles such as software engineer. According to the demographics, candidates with neither a Ph.D. nor a Master's degree showed slight growth, from 27 percent to 32 percent. This may be due to the constant proliferation of online MOOCs, online education programs, and online data science certification programs. Organizations are now more focused on candidates having practical skills, not just theoretical knowledge, and these skills are now easily achievable through certification programs that offer projects and real-world problems to solve. Perhaps you'll need to wait one more year to check the latest data science trends that took place in 2021. We hope Kaggle continues the survey in the coming years.
Docker Spark Image With Code Examples

We'll attempt to use programming in this lesson to solve the Docker Spark Image puzzle, as demonstrated below. You can use this image to practise or try things out in a safer environment with a Jupyter notebook: https://hub.docker.com/r/jupyter/pyspark-notebook We were able to fix the Docker Spark Image problem by looking at a number of different examples.

Can I run Spark in a Docker container? Only Spark executors will run in Docker containers. In the "classic" distributed application YARN cluster mode, a user submits a Spark job to be executed, which is scheduled and executed by YARN. The ApplicationMaster hosts the Spark driver, which is launched on the cluster in a Docker container.

Can Spark be containerized? Consider two recent trends in application development: more and more applications are taking advantage of architectures involving containerized microservices in order to enable improved elasticity, fault-tolerance, and scalability, whether in the public cloud or on-premises.

How do I run Spark on my laptop?
- 1. Run a container to start a Jupyter notebook server: docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook
- 2. Connect to the Jupyter notebook by copying/pasting the printed URL into your browser.
- 3. Try to run some sample code.

What is Docker, the spark for the container revolution? Docker is a software platform for building applications based on containers: small and lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another.

How do I create a Docker image for PySpark? You need to perform only three steps:
- Install Docker Desktop on your computer.
- Select the custom PySpark runtime container image that you want to run.
- Copy the URL printed in the terminal, for example http://127.0.0.1:8888/?token=9441629356952805506d51f2798407db534626989f4a4363, and paste it into your browser.
How do you run Spark on Kubernetes?
- Create a Kubernetes cluster.
- Define your desired node pools based on your workloads' requirements.
- Tighten security based on your networking requirements (we recommend making the Kubernetes cluster private).
- Create a Docker registry for your Spark Docker images, and start building your own images.

Is Kubernetes the same as Spark? No. A Kubernetes cluster consists of a set of nodes on which you can run containerized Apache Spark applications (as well as any other containerized workloads). Each Spark app is fully isolated from the others and packages its own version of Spark and its dependencies within a Docker image.

Does Spark on Kubernetes need Hadoop? In the traditional Spark-on-YARN world, you need to have a dedicated Hadoop cluster for your Spark processing and something else for Python, R, etc.

How do you containerize a Spark application? The last step is to create a container image for the Spark application so that we can run it on Kubernetes. To containerize the app, we simply need to build it and push it to Docker Hub. You'll need to have Docker running and be logged into Docker Hub, as when we built the base image.

What is a Docker container vs. image? The key difference is that a Docker image is a template that defines how a container will be realized, while a Docker container is a runtime instance of a Docker image.
from vk_api.keyboard import VkKeyboard, VkKeyboardColor
from vk_api.utils import sjson_dumps

KEYBOARD_TEST = {
    'one_time': False,
    'buttons': [
        [
            {
                'color': 'default',
                'action': {
                    'type': 'text',
                    'payload': sjson_dumps({'test': 'some_payload'}),
                    'label': 'Test-1'
                }
            }
        ],
        []
    ]
}

EMPTY_KEYBOARD_TEST = {'one_time': False, 'buttons': []}

keyboard = VkKeyboard()


def test_keyboard():
    keyboard.add_button(
        'Test-1',
        color=VkKeyboardColor.SECONDARY,
        payload={'test': 'some_payload'}
    )
    keyboard.add_line()
    assert keyboard.get_keyboard() == sjson_dumps(KEYBOARD_TEST)


def test_empty_keyboard():
    assert keyboard.get_empty_keyboard() == sjson_dumps(EMPTY_KEYBOARD_TEST)
I'm working with a Swing application that does not connect to the internet. It does connect to a database. Before it can do that, it needs to read the IP address of the database from an XML config file that is resident on the local machine. If the app is ported to another computer, the IP could change, which is why it's in the config (that's out of my control). I need a class to hold the IP address so the app can connect to the database. So when the app starts, it reads the config.xml, connects to the database and performs queries. I'd also like to add some more values to the properties of this configuration class after some more queries have been run and results returned. For instance, if I were building a Volkswagen, the number of seats and engine size would be different than if I were building a Ferrari. These results would be returned from the queries. If I create a static instance variable and get an instance of the class, all the information I need would be there, or I could add new information as more queries run and results are returned. I'd like to avoid using setters, but since the class holds a lot of information, the constructor would be ugly with all of the parameters necessary. Does this sound like a candidate for a builder pattern? My understanding is that a factory pattern would create the same type of object every time, and I might not need all of the properties. I need the properties available, though, in case I want to build a Ferrari instead of a Volkswagen. As the user makes choices, I want to add to this class and use its properties to drive other areas of the app without having to return to the database all the time.

I wouldn't create a "configuration" class that holds both the application's runtime configuration options and data related to the domain. Typically you could use the Properties mechanism for the former, which can read key/value pairs from a "plain" text file or a simple XML structure.
If that's not flexible enough you could always implement your own configuration mechanism based on an XML file, but I'd have a look at the Properties API first. As for the "domain configuration options", for lack of a better description, I'd be very careful about how to proceed. At first glance, caching domain data retrieved from a database, which is what we're talking about, seems like a good idea that's easy to implement, but it can get complicated fast. Here, again, you may be better off using a pre-existing solution. There are several caching solutions that you could look at, like EHCache, but you'd still have to tie such a solution to your database access code. At the risk of sounding like a broken record, there are already frameworks that can do this for you. An OR (Object-Relational) mapper like Hibernate or EclipseLink can make use of first- and second-level caching strategies to minimize database access, and they make working with a database a lot less of a hassle, although truthfully these frameworks aren't exactly trivial and can themselves be difficult to master. I would really recommend you take a look at ORM technology though.

Build a man a fire, and he'll be warm for a day. Set a man on fire, and he'll be warm for the rest of his life.

Thank you, I'll look into ORMs. The current architecture uses all stored procs, and I built the front end to retrieve the metadata to get the number of columns in the result. I build the model for the JTable using this information by putting the column names in one container and the data in another. This way my JTable stays dynamic based on the results returned. Any table joins or view lookups are done in the stored proc. Much of this code is inherited, and there was already a class for reading XML, so I just reused it, as well as editing the existing XML file that had its own XSD schema file. A lot of this code was written in 2003, so I'm trying to bring it up to date (it used flat files). A lot has changed since then.
There aren't any frameworks used either, and I'm not sure I want to retrofit something like Spring into this. Originally I didn't think there would be any problem holding the initial database IP alongside things like title or backgroundImage; they aren't supposed to change over the life of the app. But I think separating them does make sense. Thanks for your advice.
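For what it's worth, the builder pattern asked about above can be sketched quite compactly. This is illustrative Python rather than Java, and all the class and field names (CarConfig, db_ip, with_seats, and so on) are made up for the example, but the same shape translates directly to a Java class with a static inner Builder:

```python
class CarConfig:
    """Config object assembled once by a Builder, then read everywhere."""

    def __init__(self, builder):
        self.db_ip = builder.db_ip
        self.model = builder.model
        self.seats = builder.seats
        self.engine_size = builder.engine_size

    class Builder:
        def __init__(self, db_ip):
            # Only the truly required value goes in the constructor;
            # everything else is optional and filled in as queries return.
            self.db_ip = db_ip
            self.model = None
            self.seats = None
            self.engine_size = None

        def with_model(self, model):
            self.model = model
            return self  # returning self allows chaining

        def with_seats(self, seats):
            self.seats = seats
            return self

        def with_engine_size(self, litres):
            self.engine_size = litres
            return self

        def build(self):
            return CarConfig(self)


config = (CarConfig.Builder("192.168.0.10")
          .with_model("Ferrari")
          .with_seats(2)
          .build())
print(config.model, config.seats, config.engine_size)  # Ferrari 2 None
```

The point of the pattern here is that you avoid both a telescoping constructor and public setters: unset properties simply stay at their defaults until a later query (or a later build) supplies them.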
Part of the Flatiron School web development curriculum is a month-long Project Mode, so we as instructors advise the students on modeling their data. We always have the students map out their models and associations before beginning to code, but there are tools besides basic flowcharts that can help a project simplify its data. One of those tools is the state diagram. What is a State Diagram? Wikipedia gives an overview of state diagrams: State diagrams are used to give an abstract description of the behavior of a system. This behavior is analyzed and represented in series of events, that could occur in one or more possible states. Some more research leads to the fact that state diagrams are representations of state machines. Vaidehi Joshi wrote two excellent blog posts that explain what state machines are, as well as how to implement them in Ruby. In short: At the risk of sounding a bit philosophical, it all boils down to actions that are taken, and the reasons we take certain actions. State machines are how we keep track of different events, and control the flow between those events. First State Steps In order to get a bit more practice modeling data, I’m going to create a state diagram to represent a Movie object. From Mitch Boyer, I received this list of production states: - Development: concept, develop screenplay, build core team, funding - Pre-Production: refine script, build crew, casting, location scouting, contracts, etc. - Production: shooting the movie - Post-Production: edit, music/sound mix, VFX, color correct - Distribution: festival circuit, VOD services (video on demand), etc. That’s a lot of information! Time to break it down. I’ll start with a simple action – putting each of the five production states into nodes: Where Can We Go From Here? Rather than try to list all possible transitions between production states in a Movie, I’ll attack one node at a time. From the first node (Development), I can move forward one state (Pre-Production). 
From the second node (Pre-Production), I can move forward one state (Production) or backward one state (Development). I can start to see a pattern: at any point, I can move on to the next state or return to the state that came before. At any point, I can also start over and return to the drawing board (Development). Represented with flowchart arrows, my diagram now looks like this:

…And How Do We Get There?

The next step is to figure out what actions need to be taken to move from one state to the next. For me, it helps to think of each state as having an entry gate: what conditions need to be fulfilled for admission to a state? Let's take the first two nodes, Development and Pre-Production. According to our list, the development stage consists of developing a concept, developing the screenplay, building the core team, and funding the project. It's impossible to move on without funding and a developed idea; those are the requirements. So if I call an action develop_and_fund, my Movie should be ready for the Pre-Production state. I can figure out my entry gates for each step forward:

Naming the backward steps is a little trickier. For example, at what point is a project no longer in Post-Production, but rather in Production again? When we need more footage. The final state diagram looks like this:

Why Use State Diagrams?

Simplicity. State diagrams allow us to represent the way that objects evolve as certain actions occur. Since information is represented visually, we can more easily point out its flaws. For example, we might decide that a Movie cannot move from Production back to Pre-Production, because the Production state will deplete some of the movie's funding; we may need to rerun develop_and_fund before we can return to the Pre-Production state.

Communication. State diagrams can help us communicate the way systems will be laid out to non-technical team members.

Organization.
Since state machines can only exist in one state at a time, we can easily sort our Movies by progress. Furthermore, we know that API calls to each different state will return mutually exclusive sets of objects. Hope you’ve enjoyed this simple example!
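The diagram above maps almost mechanically onto code. Here is a rough Python sketch of the Movie state machine (the post's own examples link to Ruby implementations; method names like advance, step_back and restart are my assumptions, not the post's):

```python
class Movie:
    """A tiny state machine for the five production states in the diagram."""

    STATES = ["development", "pre_production", "production",
              "post_production", "distribution"]

    def __init__(self, title):
        self.title = title
        self.state = "development"

    def advance(self):
        """Pass the entry gate into the next state (e.g. develop_and_fund
        would move Development -> Pre-Production)."""
        i = self.STATES.index(self.state)
        if i < len(self.STATES) - 1:
            self.state = self.STATES[i + 1]

    def step_back(self):
        """Return to the previous state, e.g. Post-Production -> Production
        when more footage is needed."""
        i = self.STATES.index(self.state)
        if i > 0:
            self.state = self.STATES[i - 1]

    def restart(self):
        """At any point we can return to the drawing board."""
        self.state = "development"


movie = Movie("Demo Reel")
movie.advance()      # development -> pre_production
movie.advance()      # pre_production -> production
movie.step_back()    # back to pre_production
print(movie.state)   # pre_production
```

Because the object is always in exactly one state, sorting a collection of Movies by progress is just sorting by `STATES.index(movie.state)`.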
Though your first sentence is not actually a question, I shall refute your statement. Energy is never burned or wasted using PoW; every single bit of it serves an extremely valuable purpose. No miner is wasting energy: they all get paid for their contribution, and that contribution is proportional to the usefulness and value of the network as a whole. It uses ...

Let me present some arguments. As an ecological dilemma, let's consider the scale of mining operations' energy consumption. I will take a random mining rig consisting of 6 AMD 480 GPUs, giving 3450 H/s, for my calculation. Each GPU has a TDP of 150 W, and that gives us 261 W/kH. The current Monero network hashrate is 30 MH/s, with a market cap of about 90 million USD.

There are two modifications: the scratchpad is only half the size of regular Cryptonight (1 MB rather than 2 MB) and the number of AES iterations is halved (half a million rather than a million). This makes a light hash about 4 times as fast as a regular one. It's a bit hard to tell how this change influences blockchain sync, since different machines will ...

One of the main reasons Cuckoo cycle is attractive compared to Cryptonight is that it is very fast to verify. This is one drawback of Cryptonight: it makes all operations that need to verify Cryptonight hashes slower. Another reason to want a switch to Cuckoo cycle would be to keep the CPU/GPU/ASIC performance within a reasonable scale. Should ASICs pop ...

Double spends are prevented by the use of key images, which are sent along with each output being spent in a transaction, and which are checked for uniqueness before allowing the transaction. In the Cryptonote protocol, an output's private key can be used to uniquely generate a key image in such a way that a miner can check a purported key image really is ...
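The watts-per-kilohash figure quoted above is easy to verify; the rig size, per-GPU TDP and hashrate are the answerer's assumptions:

```python
gpus = 6
tdp_watts = 150        # per-GPU TDP assumed in the answer
rig_hashrate = 3450    # H/s for the whole rig

total_watts = gpus * tdp_watts                    # 900 W for the rig
watts_per_kh = total_watts / (rig_hashrate / 1000)
print(round(watts_per_kh))  # 261, matching the W/kH figure quoted
```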
There is a high-level description of this algorithm at http://cpucoinlist.com/cryptocurrency-algorithms/wild-keccack/ which reports as follows: Wild Keccak is a Keccak hybrid which uses blockchain data as a scratchpad. After each Keccak round, pseudo-randomly addressed [state vector used as addresses] data is taken from the scratchpad and XORed with ...

Each transaction generates a key image. In the CryptoNote protocol, key images used more than once are rejected by the blockchain as double-spends. When a new transaction is received, the miner need only verify that the key image does not already exist in the database. When your wallet is scanning the blockchain, it must check each transaction output in ...

PoW is more secure than PoS in that it costs nothing to try to fork a PoS chain. An attacker can try to make as many different chains as they can with a PoS chain, but with PoW, if you try to fork the chain and you fail, that orphaned block took time and energy, because your computer is hashing away, working. There is no work needed in PoS mining, therefore ...

I think the overall answer is "it's ridiculously impractical to perform by hand". The Cryptonight hash operates over a 2 megabyte data space, using multiple rounds of AES along with a variety of other cryptographic hash algorithms. What human is going to have the patience to write out 2 million bytes of data even once, let alone multiple times? I would ...

In versions 1 to 6 of the protocol, the CryptoNight algorithm was very roughly:

state = keccak(block_data)
scratchpad = fill_scratchpad(state)
loop 524,288 times:
    address = compute_address(scratchpad, state)
    text = reduce(scratchpad, ...

The role of PoW is only to order transactions chronologically, nothing else. The thing is, PoW is the only known way to have the authority on transaction ordering be decentralized. PoS can't work for that purpose.
There's some good research on this: https://download.wpsoftware.net/bitcoin/pos.pdf The problem boils down to the fact that, with PoS, what you're ...

I like the question because an answer to it will give a better understanding, from first principles, of the underlying algorithms. So question 1 has two parts: 1a) How does the CryptoNight PoW algorithm work at all? That's specified here: https://cryptonote.org/cns/cns008.txt 1b) How does it really work, on a low level, looking at elementary instructions? ...

"The miner who found the block will be rewarded with some Moneros, at the expense of the sender." That also happens, and it is called a transaction fee. Transaction fees are small: 0.002 XMR (per output, or per kB, I am not sure). On the other hand, what you are observing is the block reward of about 10 XMR per block found, which seems wasteful at first glance, ...

Yes, that's basically what would happen if Monero switched to any other PoW. A good example is Vertcoin's switch from Scrypt-N to Lyra2RE (and then to Lyra2REv2). Essentially, the changeover works like every other hardfork in the network. A future block height would be picked as the changeover point, after which the proofs of work in the blocks would have ...

I think the CryptoNote website's page about the egalitarian proof of work is about the inner workings of the hash function, not about how the hash of a block is computed (which is basically cn_slow_hash(block_header + tree_hash(block_transaction_hashes)), as you thought). Internally, the Scrypt function computes blocks of pseudo-random data, something like: ...

Generally you can simply look at the value of the current block reward as an indication of how much electricity it takes to secure the network. At 8.7 XMR per block it costs about 78k USD in electricity to secure the network each day. Based on this, let's roughly say it is 0.15 USD/kWh, so 78k / 0.15 ≈ 514,800 kWh each day.

cn_slow_hash is CryptoNight. cn_fast_hash is Keccak.
As the names imply, the former is much slower than the latter. Both hash a contiguous buffer. Cryptonight is used for PoW and KDF, while Keccak is used for everything else. tree_hash is a Merkle tree hasher: it works on a binary tree of hashes, and uses cn_fast_hash for the actual buffer hashing.

Monero's genesis block dates back to April 2014. The first PoW change was in 2018. The full history is:

| block   | date       | PoW algorithm                    |
| 0       | 2014-04-18 | Cryptonight (retroactively CNv0) |
| 1546000 | 2018-04-06 | Cryptonight variant 1 (CNv1)     |
| 1685555 | 2018-10-18 | Cryptonight variant 2 (CNv2)     |

One Monero block is (1 << 21) / 16 [Source (Lines 40, 43 and 90): https://github.com/monero-project/monero/blob/master/src/crypto/slow-hash.c#L40]. The Monero block reward = (M - A) * 2^-20 * 10^-12, where A = current circulation. Source: https://monero.stackexchange.com/a/4254/2828. With Monero, the difficulty is dynamically adjusted so that blocks ...

Quoting SChernykh (one of the CryptonightR authors): CryptonightR is a modification of Cryptonight, whereas RandomX is done completely from scratch. The main purpose of CryptonightR is to be the next PoW for Monero until RandomX is ready. Which leads to why RandomX needs more auditing/testing: RandomX is a completely new PoW algorithm, not just a modified ...

How do you get the decimal difficulty (480045) from a given hexadecimal target (f3220000)? Swap the endianness of f3220000 and remove the padding to get 22f3; then 0x100000001 / 0x22f3 yields 480045. How do you get the hexadecimal target (f3220000) from a given decimal difficulty (480045)? ((2^256 - 1) / 480045) >> 224 is ...

The simplest reason why Monero uses Proof of Work (PoW) is that it is guaranteed to work and was the only option at the time (2014). It is entirely possible that Proof of Stake consensus algorithms will dominate PoW algorithms in the future, but at the time of this answer (2017), this is not the case. Monero is using quite a number of new ...
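The target/difficulty conversion described above can be reproduced in a few lines of Python, using exactly the constants quoted in the answer (the function names are mine):

```python
def target_hex_to_difficulty(target_hex: str) -> int:
    """Swap the endianness of the 4-byte target, drop the zero padding,
    then divide 0x100000001 by the compact value (as described above)."""
    swapped = bytes.fromhex(target_hex)[::-1].hex()  # 'f3220000' -> '000022f3'
    compact = int(swapped, 16)                       # 0x22f3
    return 0x100000001 // compact


def difficulty_to_target_hex(difficulty: int) -> str:
    """Inverse direction: ((2**256 - 1) // difficulty) >> 224,
    then back to the little-endian hex form."""
    compact = ((2**256 - 1) // difficulty) >> 224
    return compact.to_bytes(4, "little").hex()


print(target_hex_to_difficulty("f3220000"))  # 480045
print(difficulty_to_target_hex(480045))      # f3220000
```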
Monero is using quite a number of new ... The arrival of quantum computers (QC) isn't necessarily a reason to change the PoW. It would only be necessary if the first generation of quantum computers are so fast that only a small investment is required to attack the network. That scenario would allow the first person to build / buy a QC to attack the network. If on the other hand--and I think this ... The process to compute the mining blob can be described by the following pseudo-code: miner_transaction = build_miner_transaction(...) miner_transaction_id = compute_transaction_id(miner_transaction) header = serialize(build_block_header(...)) transaction_ids = append(miner_transaction_id, other_transaction_ids...) count = serialize_varint(... The question to ask isn't 'Can this be done?' but rather 'Is this something we can benefit from?'. Since you did ask explicitly if it was possible, I'll address that first: Yes, it's absolutely possible for Monero to adopt this proposed PoW algorithm. The PoW algorithm is orthogonal to the issue of privacy, as long as the PoW algorithm chosen isn't one ... Based on this answer, an efficient device to mine Moneroj at the moment can produce ca. 6 H/s/W. That's around 20 MH/kWh. The current difficulty of the network is around 6.5 billion (average number of hashes to find a block), or ca. 200 billion hashes an hour (30 blocks per hour). So that puts the required electricity consumption at approximately 200,000 (... At block 1788000, Monero will switch PoW from CryptoNight variant 2 to variant 4 The current naming is a little misleading as we are really on variant 3 right now, with the next being variant 4. This is because variant 2 actually spanned 2 releases in quick succession. Naming aside... How is that second goal technically achieved in practice ? The best ...
Data Science Roundup #66: The 6 Top Data Science Articles from 2016

Happy New Year! This issue is a Roundup of Roundups; I went back through the 2016 archives and dug up the most-clicked headlines. Make sure you didn't miss any of these posts. On a personal note: thanks to every one of you for making this newsletter a part of your week. Your time and attention are valuable and I appreciate you sharing them with me. I would love to make 2017 our best year yet. If you find the Roundup useful, the biggest way you can help is by sharing it with your friends and colleagues. Thanks so much, and happy 2017! 😂 🎉 🍾 Referred by a friend? Sign up.

2016's Most Popular Data Science Articles

Business intelligence tech has changed really dramatically over the past 3–4 years, and the most common question I get from folks in the industry is "What's your analytics tech stack?" This post lays out my recommendations, from ETL to data warehousing to data modeling to analysis. There are surprisingly few people doing this right.

I've been covering the replication crisis for over a year at this point and believe it is profoundly important. Most articles on the topic point to systemic problems, but I've never seen someone go so far as to assert that we're fundamentally doing statistics incorrectly. Here's a quote: "Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong."

This is perhaps my favorite "how I taught myself machine learning" post, specifically because it also highlights the author's failures. Learning ML isn't easy, especially for someone with a fairly light technical background, and learning from someone else's mistakes is invaluable.

Q1. Explain what regularization is and why it is useful. Q3. How would you validate a model you created to generate a predictive model of a quantitative outcome variable using multiple regression? This might be the best data science study guide out there.
It’s also a great personal check to find your own blind spots 🙈 This post, originally written for internal consumption at Google, is absolute gold. in it, the author lays out specific technical, procedural, and social guidance for how to go about conducting analytics. My favorite quote: “Credibility is the key social value for any data scientist.” If you spend your days doing analytics, this is a must-read. We’ve all seen poor visual design of tables: left-aligned numbers? Tons of useless formatting? There’s a lot that goes into making tabular data easy to consume, and with all the attention that goes into data viz today, the UI of tabular data often gets overlooked. No longer. These Python libraries will make the crucial task of data cleaning a bit more bearable—from anonymizing datasets to wrangling dates and times. I’m personally going to check out PrettyPandas, as I definitely need more formatting control over the data tables I output for my clients. Thanks to our sponsors! Fishtown Analytics works with venture-funded startups to implement Redshift, BigQuery, Mode Analytics, and Looker. Want advanced analytics without needing to hire an entire data team? Let’s chat. Developers shouldn’t have to write ETL scripts. Consolidate your data in minutes. No API maintenance, scripting, cron jobs, or JSON wrangling required. The internet's most useful data science articles. Curated with ❤️ by Tristan Handy. If you don't want these updates anymore, please unsubscribe here. If you were forwarded this newsletter and you like it, you can subscribe here. Powered by Revue 915 Spring Garden St., Suite 500, Philadelphia, PA 19123
Bitesized tidbits for building Modern (Metro) apps.

Monthly Archives: March 2012

March 28, 2012

The Windows Phone update codenamed Tango is coming soon, and with it come devices that have less memory than the usual 512 MB, specifically 256 MB. Your app needs to take into account whether it should be able to run on phones with 256 MB of RAM; this is discussed pretty well on MSDN at http://msdn.microsoft.com/en-us/library/hh855081(VS.92).aspx. So is there a quick way in your app of seeing whether the device your app is running on has 256 or 512 MB? Yes, and it is touched on in that MSDN link. I decided to just extend the code they give into a quick and easy property that you can have in your app/viewmodel.

March 12, 2012

The AppHub and Marketplace support private betas, which as a developer is great: you get to give your app to real people who might not necessarily have a developer-unlocked (or ChevronWP7-unlocked) device. This is done by submitting an app to the AppHub in the same way as you would a normal app, but with one difference: on the first page, you change this option. Once this goes through, you can then give your beta testers a Zune link and they can access the app through their phone's Marketplace hub. Part of the process of submitting a beta app is providing a list of Windows Live IDs, and these WLIDs are the only ones that have access to the beta. The acceptance of beta apps is a lot quicker than for a normal app, as there aren't as many checks done, so you should receive a confirmation within a few hours to say it's approved, along with your Zune link. Now, here's the problem: there's no way of knowing exactly how long it will then take from approval to visibility, especially with how slow the Marketplace is at the moment at displaying new apps. There's no real way of seeing when your app is available in the Marketplace to your testers. Or is there?
March 6, 2012

So you've published your paid-for app, and you included a trial to hook some people in. Great. You look at your download figures in your app list on AppHub and the figures are looking good. Great! Roll on that fat royalties cheque from Microsoft. But wait a minute: you included a trial, so could these figures just all be trial downloads? They could be, but it's not quick to find out, so let me show you.

March 1, 2012

There are a few things I've been doing to work around a couple of scenarios when it comes to using the ListBox in Windows Phone. The first is when your ListBox has no items: I want it to show something to the user stating there are no items. The other involved limiting the number of items that are enabled based on whether the app was in its Trial Mode. The first scenario had seen me using a TextBlock and a ListBox and hiding one or the other depending on how many items were in the ListBox; it wasn't really ideal. For the other scenario, well, I didn't have any workaround, but I already had what I'm about to show you done for the first scenario, so I modified it for the second. Read on to find out more.
Introducing the feature you've all been waiting for since you tried the Reference field type: the ability to add multiple items from a referenced collection. Be aware that if you decide to update your current Reference field type to allow multiple items, your previously added items will not be preserved, so make sure you back up your data or start by creating a new field. There are also some changes in how you should connect your data when working with a Multiple Item Reference field in the Editor. Unlike the Reference field type in Single Item mode, you won't be able to connect this field type directly to components. Instead, you need to rely on a Dataset. Whenever you want to show Movies and their Actors, just add 2 datasets, one for movies and one for actors, and then apply a Filter by Dataset. It might sound confusing at first, but once you go over this article, you'll be an expert: https://support.wix.com/en/article/working-with-multiple-item-reference-fields . Some more details on how the Reference field works in this new Multiple Item mode can be found here: https://support.wix.com/en/article/about-referencing-multiple-items-in-one-reference-field . Some multiple item reference API: hasSome() (aka "includes some of"). Last but not least, we would really appreciate it if you gave us some feedback on this feature by filling in this anonymous 4-question form.

The Google Form has a permission error.

Quick question for you in code: I want to use setFilter in code for multiple items like this… .contains("DemoCategories", "name of category clicked"). I have two repeaters, one with buttons as categories and one with products. When I click the button I want to use the "includes" that you have as a filter between datasets, but in code. Is that possible?

This is great! But note that in the past this was also possible; you just defined the field in the database as a URL. Now, anyone who used it that way before gets an error!

We'll publish multiple-item-reference-specific API on Monday.
Currently, you can achieve that without code if you use a table and repeater, as the selected row in the table changes the current item in the dataset. Alternatively, you can change the current item on the category dataset, and the filter on the other dataset will give you filtered data.

Strange thing, but the function $w("#dataset1").setCurrentItemIndex(1); does not trigger within the loop in the repeater.

I'm having some trouble with updating the field.

That error can sometimes occur when working with Data Collections. Just keep on trying and it will probably go away soon.

Looks promising. I could have used this months ago. Instead, I developed it as 1-n relationships (1 main row, n detail rows with 1 ref per row to the master), forcing me (for result viewing) to also write my own left outer join, wrap it in an array of objects and hand that to a repeater. One question: did you envision some kind of error handling for the case where the master row holds refs A, B, C and D to detail, but Details has no C (or any other of the 4); in short, inconsistency?

@giri, that was the workaround to have multiple references. And you shouldn't worry about that kind of error situation; Details will always contain A, B, C and D and will always be consistent.

@tandrewnichols, the order of the added items in the cell is based on the timestamp when each item is inserted. As each insert happens asynchronously, there's a chance that if you quickly insert multiple items, some inconsistencies might occur. On the other hand, when you use a Dataset to show the items from the referenced collection, you can not only use the Include/Exclude field on a multiple item reference field, but also add additional filters to show items, for example, alphabetically.

@Andreas, check out the API for this type of field; I updated the original post.
the order of added items in the cell is based on the timestamp when the item is inserted @adas-bradauskas Are you referring to the date created in the reference table, or is there a separate timestamp for when it's added as a reference? For example, if I add "apple" and then "banana" to the fruits dataset, and then in another dataset with a "fruits" reference field add "banana" and then "apple" . . . what order will they be displayed in, and is there a way to force them to be displayed as "banana" then "apple"? Incidentally, I'm currently trying to do this by including them in the query results with (using the same example), but that's actually throwing an error. Specifically, with this error: Error is on newIncludedFields.push. I can fix it if I set query.included = ; first. I'm guessing that value just isn't initialized correctly. @tandrewnichols, the order is based on when items are added to the referenced table. So in your example, when you have a multiple item reference field for "Fruits", you should see "banana" and then "apple". Keep in mind that when working with a multiple item reference field, you'd be better off using the specific API, so to get those items you should use queryReferenced(). Thank you. I was not aware of queryReferenced. I started in with a cheer… but then pulled up with a whoa! Not finding how to get the multiple items to show up in the field. Not sure how the sorting you are using is accomplishing this. So a bit confused on using the filter to get around this as you describe… more explanation is needed on this (here and on the Wix page). Not sure how this accomplishes the task… maybe there's no need to understand, but without that I cannot visualize how to make this jump to multiple item reference. I am just wanting one column to appear right now, pulling from one dataset… will get that understood and then tackle more. So in your case, I just want to show a list of the actors that I have "listed" in the multiple-item reference field.
Have tried using filters as noted and it does not work at all… it will not allow you to connect a table to a multiple-item table no matter what filtering you use. Something is not right with the examples given (a step is missing here and on the Wix page). And in reading back through your explanation, I believe it should be reworded. It makes it sound like you can use a filter to accomplish the goal of connecting a multiple-item reference to a table. You cannot. Your point was to tell folks to use the old way of connecting data (I believe). So this begs the question… what good is the multiple-item reference field when you cannot connect it to anything? Where am I going wrong? Referencing a field to its own database works (allowed for single-item references), but I am not getting anywhere with trying to get multiple-item to work. So what is the purpose of the multiple item reference field in Wix's eyes? It seems a decision has to be made about using or not using it when building a database; otherwise, if you change, you have to completely rebuild the database. What features are in the works? Because as I see it, currently it has no use, as you cannot connect a multi-ref to any items… there is only the workaround of using the "old" single-ref fields with a dataset sort. So what has been gained?
Stale device status after disallowing internet access

I have a robot vacuum which I was able to control locally, but after cutting it off from the internet I no longer get any status updates... Here's the script:

```python
import tinytuya
import sys
import time

device = tinytuya.Device('DEVICE_ID', 'IP', 'LOCAL_KEY')
device.set_version(3.3)

request = sys.argv[1]

if request == 'status':
    print('Datapoints %r' % device.status())
    sys.exit()
if request == 'start':
    payload = device.generate_payload(tinytuya.CONTROL, {'1': True, '2': False})
if request == 'pause':
    payload = device.generate_payload(tinytuya.CONTROL, {'2': True})
if request == 'charge':
    payload = device.generate_payload(tinytuya.CONTROL, {'4': 'chargego'})

print('Control response %r' % device._send_receive(payload))
device.send(device.generate_payload(tinytuya.UPDATEDPS))
```

When the device is allowed to access the internet, this is the response I get:

```
% python3 myscript.py start
Control response {'dps': {'1': True, '2': False}, 'type': 'query', 't':<PHONE_NUMBER>}
```

Then I disconnect the device from the internet (in my firewall I block all outgoing and incoming requests to the vacuum, including DNS requests) and now the response looks like this:

```
% python3 myscript.py start
Control response None
```

The vacuum does start cleaning and it listens to all commands, but when requesting the status it doesn't update its dps. Later on I added the following to test things out, but I never received any dps updates until I unblocked the device's internet connection. That, however, is quite an invasion of my privacy in my opinion, so I want it disconnected from the WAN.

```python
if request == 'update_status':
    while True:
        device.send(device.generate_payload(tinytuya.UPDATEDPS))
        print('Datapoints %r' % device.status())
        time.sleep(1)
```

Hi @lankhaar - I understand. Unfortunately, Tuya devices are designed to be "cloud first".
The fact that we have been able to use their local APIs is a nice feature and the motivation behind projects like TinyTuya. However, some of the devices don't behave well or have only limited functionality without the cloud. If you haven't done so already, you could try to switch to persistent connection mode and see if you get status updates: see https://github.com/jasonacox/tinytuya/blob/master/examples/monitor.py as an example. Hi @jasonacox, thank you for your response! I have tried that already, unfortunately to no avail. Could this possibly be fixed in the future? Could this be solved the hard and tedious way by proxying the requests to a local API to mimic Tuya's response? If so, I'm up to give that a go. Not sure if you have an idea already how Tuya handles that? Does Tuya just confirm the updated DPs? And does Tuya do this synchronously or asynchronously? It might be unrelated, but my assumption is async, because if I only block inbound traffic but allow outbound traffic it still doesn't work, which suggests that Tuya responds in an async way. I'd love to hear from you! I'm in the same boat with my thermostats - they stop sending updates if the cloud's disconnected. Unfortunately, as @jasonacox said, these are cloud devices and local control is an afterthought. Most complex devices like these usually have a main chip running the device and a 2nd chip just doing WiFi. The WiFi chip reports the cloud connection status to the main chip, and the main chip decides whether or not to send updates to the WiFi chip. Tuya's cloud server is just MQTT and is secured by pre-shared key (PSK) TLS. As for work-arounds, there really aren't any good ones that I know of. The only options are:

1. Convince the device manufacturer to update the device main firmware to not care about the cloud connection status.
2. Patch the device main firmware yourself to not care about the cloud connection status.
3. Dump the WiFi chip firmware, extract the PSK, and roll your own cloud server.
4. Use something like Tuya CloudCutter to overwrite the WiFi chip PSK and roll your own cloud server.
5. Replace the WiFi chip firmware with something like OpenBeken.
6. Replace the whole WiFi chip with a different one.

Chip support for options 4 and 5 is going to be limited. The fact that the WiFi chip is separate from the main chip makes it easier to replace just the WiFi chip without needing to mess with the chip controlling the device functions.
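For reference, the persistent-connection mode suggested earlier in the thread boils down to something like this (a sketch only, modelled on the monitor.py example; `DEVICE_ID`/`IP`/`LOCAL_KEY` are placeholders, and whether you actually receive pushed updates still depends on the device firmware's attitude toward the missing cloud connection):

```python
def monitor():
    """Keep one socket open to a Tuya device and print pushed dps updates."""
    import tinytuya  # pip install tinytuya

    d = tinytuya.Device('DEVICE_ID', 'IP', 'LOCAL_KEY')
    d.set_version(3.3)
    d.set_socketPersistent(True)   # keep one TCP socket open to the device

    print('Initial status: %r' % d.status())
    while True:
        data = d.receive()         # block until the device sends something
        if data:
            print('Received update: %r' % data)
        d.heartbeat()              # keep the socket alive between updates
```

If the device firmware suppresses dps reports while it considers itself offline, no amount of polling or persistence on the client side will bring them back, which matches the behaviour described above.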
There has been a lot of attention lately around the WHATWG <video> specification recommending Theora and Vorbis as baseline codecs. The issue seems to have gained some attention since the position paper Nokia submitted to the W3C Video on the Web workshop which made it clear they didn't want Ogg included. The reference to Theora and Vorbis has since been removed from the WHATWG specification and replaced with the wording: It would be helpful for interoperability if all browsers could support the same codecs. However, there are no known codecs that satisfy all the current players: we need a codec that is known to not require per-unit or per-distributor licensing, that is compatible with the open source development model, that is of sufficient quality as to be usable, and that is not an additional submarine patent risk for large companies. This is an ongoing issue and this section will be updated once more information is available. This has also caused quite a bit of discussion around the web. From what I can see the main objection to Theora is the submarine patent issue. Theora is not 'patent-free'. Rather all known patents relating to it have been released to the public. Submarine patents are those which are unknown. They refer to the practice of keeping quiet about a patent on a technology until some company with a large amount of money implements it. Then the patent holder surfaces and attempts to get large amounts of money for it. The problem of submarine patents is not specific to Theora. All forms of software technology face the risk. H.264 seems to be the current popular technology for those that oppose the use of Theora in the specification. H.264 also has a risk of submarine patents (like any software) but companies have already exposed themselves to this risk because they ship H.264 based systems. By having to implement Theora they then open themselves up to a second avenue of risk, one which they didn't face before. 
One could argue that implementing anything new contains this element of risk, so following that logic these companies wouldn't be doing any new implementations of anything. That's obviously not the case, or nothing would get done. They weigh the risk of patent issues against the reward of implementing things. With the current perceived low usage of Theora they probably don't see any advantage for themselves in implementing it versus the risk. So that appears to be why Theora is not a desired choice by some groups. I've seen questions in the discussions asking why we don't settle on H.264. The big companies are already using it. There is the open source x264 encoder and the ffmpeg decoder. While these projects are open source, my understanding is that any user of the product must pay the required license fees to the patent holders on the technology they use. In the case of H.264, this is the MPEG-LA. The summary of the terms lists the fees. There is a cap on the total amount a company should pay, so why not just pay this amount and ship H.264 support? For an open source project this creates some difficulty. If the source for an H.264 decoder is included, then anyone who downloads and builds from source, forks the project, or otherwise distributes a build would seem to have to deal with the licensing issues separately. That is, the cap wouldn't cover all usage of the source. This would immediately limit the distribution of the browser and the ability to embed the engine in other products without removing the H.264 capability. With the capability removed you're back to the problem of what codec should be supported with <video>. To effectively implement the HTML <video> specification in a way that can play compatible video streams with other (closed source) browsers, you'd have to front up with that fee. It might seem that the best approach is to not specify a baseline codec for <video>. This also has problems.
The big advantage of a baseline codec is that a content producer can provide video in a format they know all conforming browsers on any platform can display. Without a baseline codec, content producers will upload video in formats specific to particular platforms and we have little advantage over the existing object/embed elements. The issue of support for DRM as outlined in Nokia's position paper isn't that big of an issue. Even with a royalty-free baseline codec, implementors are still able to support other codecs. They can support whatever DRM-specific codec is required by 'big media'. But it is important that a free baseline is available for those that just want to supply video in a convenient manner without having to pay any money. Robert O'Callahan mentions this in his recent blog posting. The W3C Video on the Web workshop starts tomorrow (Wednesday). The subject of HTML 5 and codecs is going to be discussed there. I'll be there talking about Mozilla's position and taking part in the discussion of the codec issue. For more good commentary on the issue you might like to read The HTML 5 Wars (and why you should avoid them). If you are passionate about the use of Ogg Theora and <video>, one of the best things you can do is start using it. Do compelling demos. Release video in Theora format. It may be easy to use a service that provides video for you in exchange for giving them certain rights, but if you want your format to succeed, then increased usage is the way.
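If you do want to start publishing Theora video, the markup side is the easy part. A minimal fragment might look like this (the file name is hypothetical, and the <video> element's exact syntax was still in flux in the WHATWG draft at the time):

```html
<video src="demo.ogv" controls>
  <!-- Fallback content for browsers without <video> support -->
  <a href="demo.ogv">Download the video (Ogg Theora)</a>
</video>
```

The fallback content inside the element is what makes incremental adoption possible: browsers that don't implement <video> simply render the link instead.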
Stage organized yet another game jam. This time it was a Windows Phone 7 game jam in collaboration with another club. It was also my fifth game jam in a time span of 14 months. Traditionally I've written about my experiences and/or my game, and this time is no exception.

1. The Concept

Our theme this time was "opposites". I was at my friend's housewarming party when the theme was released. It's not really the best environment to come up with game ideas, but at least I was able to ask a lot of people for different pairs of opposites. The game I ended up with uses opposites in a couple of ways but was more or less inspired by the possibly most ridiculous idea: "left and right". My actual game idea originated from Triple Triad, which to this day is the best minigame ever. For those who don't know it, it's a collectible card game in Final Fantasy VIII with absolutely elegant gameplay. Cards are laid down on a 3x3 grid. Each card has a value from one to ten on each of its sides. When placing a card, it flips over any cards that have lower values on opposing edges. Keyword: opposing. No one was really interested in my idea after the pitching session. Most teams had been formed before it anyway, while I was at the party. Fortunately for me, a group of us went to get some pizza to further develop our ideas. I refined my idea while the rest of the guys were eating and started explaining it in more detail. It sounded pretty crazy at the time. The concept in all simplicity was Triple Triad meets Lumines. Tiles drop in square blocks of four. Each tile has a number on each of its sides. Upon falling down, a tile eradicates all tiles that have lower numbers on opposing edges. Blocks break down like in Lumines so that tiles never hover in the air. Tiles that have equal numbers on opposing edges form chains; if one tile in a chain is destroyed, the entire chain goes with it. The concept sounded like it had enough mindfuck to be worth a try.
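The capture rule described above can be sketched in a few lines of code (an illustration only, not the jam code itself, which was written for WP7; tiles are modeled here as simple dicts mapping side names to strength values):

```python
# Each tile stores a strength value per side. When two tiles are adjacent,
# the attacker's side faces the defender's *opposite* side.
OPPOSITE = {'left': 'right', 'right': 'left', 'top': 'bottom', 'bottom': 'top'}

def captures(attacker, defender, side):
    """True if attacker's value on `side` beats the defender's opposing side."""
    return attacker[side] > defender[OPPOSITE[side]]

def resolve_landing(tile, neighbours):
    """A newly landed tile eradicates every neighbour it beats.

    neighbours: dict mapping side -> adjacent tile (or None for empty space).
    Returns the list of destroyed tiles.
    """
    destroyed = []
    for side, other in neighbours.items():
        if other is not None and captures(tile, other, side):
            destroyed.append(other)
    return destroyed
```

The chain rule would then be a second pass: any tile whose opposing edge equals a destroyed neighbour's edge is destroyed as well, propagating through the chain.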
There was just one little problem. Displaying numbers on each side of a small tile would look *messy*. First we thought to use different colors, but they would be rather hard to learn. The best idea won: use different shades of grey. It's easy to learn that the darker the color, the stronger the number. This also had another benefit, which was discovered later. We judged that this project had only enough work for one programmer, which would be myself, so setting up the project was easy. Without even starting my laptop, I decided that there was enough of a plan to go with, and went back to the party. Here's a piece of wisdom: even though 48 hours is not a lot of time, quitting early on the first day is often a good idea. You'll have a much more refined idea of your game when you come back after a night's sleep, and you still have a lot of time to work on it. Just make sure you really want to work on this idea before quitting and that it seems feasible to you, all things considered. I love block dropping puzzles and their relatives. They are challenging to design and fun to play when done well. Here's the result from this game jam. The working title was TetraBlocks (because the blocks have, well, four significant sides) and since the tiles together look like a mosaic, coming up with the final title was easy. The design, on the other hand, was only a partial success. The way the game was designed, i.e. tiles become active (able to destroy other tiles) every time they move, resulted in combos that made it really hard to lose. Next to impossible, really. Nevertheless, the game is in fact quite nice to play, and using shades of grey it is not actually that confusing. Sure, it's still a little hard to keep up with everything that's going on, but that's part of the design. Notice how using shades of grey allowed us to highlight chains with other colors.
In the beginning there was one glaring problem: due to my choice of mechanics, a block that has a strong value on its bottom side will typically just smash through everything else. I had to implement a mechanism against this that was not very elegant: the bottom side is weakened every time it defeats another block. The less elegant part is that this doesn't apply to the other sides (again for gameplay reasons). I'm still trying to come up with ideas to put challenge into the game without adding complexity and exceptions. The key mechanic itself is promising enough to warrant further design effort. This is another lesson for jammers: don't expect a great design from your game jam. A functioning concept is often enough and really all you'll get done.

3. Love debugging, or else...

Last time I was doing programming at a game jam I burned out pretty badly. This time it might have happened as well. While the game concept in Tetraic is quite simple, all sorts of rules for dropping and erasing blocks can cause a lot of bugs. And they did. I spent something like half of all the time I had on Saturday debugging. A few bugs took hours to solve. But somehow I didn't get frustrated like I usually do. The thing is this: when you are in the right mindset, debugging is in fact pretty damn fun and rewarding. Sure, you're still cleaning up your own mess but... try not to think about it too much. Everyone makes mistakes in programming - in fact, if you don't, then you're probably not learning much. Game jams are typically recipes for arcane bugs that can drive people insane. I always start with great intentions and good code design, but the end result is usually pretty far from it. There is no time for complete refactoring when it's called for. From my experience, the best way to deal with a tough bug is to share it. It doesn't matter with whom; just talk about the bug aloud. Explain what you think happens. Doing this will help you get your own gears moving.
When explaining something aloud, your brain can no longer fill in the gaps that existed in your theory while it was only in your head. This also applies to design. We have a tendency to assume things work as intended, even though nothing in a theory or design actually takes it into consideration. In debugging, we think we know what happens in the code, when we really do not fully know. We assume it does what it was intended to do. This gets especially bad when you are absolutely sure that one particular piece of your code cannot possibly be wrong. Of course it is always annoying to be unable to finish or polish your game enough because some nasty bug takes away hours of your development time. But this is what happens in game jams, so learn to deal with it. Of course, having programming experience does help tremendously, and even though starting with great code design is usually a vain attempt to keep things organized, it does always help. In particular, object-oriented programming is a staple in game development. Coding too many interactions into one object is a recipe for disaster. Finding the culprit is so much easier if every one of your screen objects maps to a code object. It is also so much easier to modify game rules if all pieces act independently. Most importantly though, remember that debugging can also be fun. When I have the time, I'll try to continue development of Tetraic. Turning it into a fully functional design seems like a fun challenge. I'll probably port it from WP7 to HTML5. I don't even own a Windows Phone, and block dropping games are much better with a gamepad or keyboard than a touch screen. With all these game jams and promising prototypes it's just getting hard to decide which one to work on. Oh, and the next game jam is like three months away...
What defines a Muslim? I have found and lost a source that states that a Muslim (as distinguished from a Believer) is someone who will not, as I remember, "attack a fellow human by hand or by mouth". To my memory the source was somewhere in the Quran, or at least in the sayings of Muhammad. Please give me a reference if this is true or not. But please note that this is not the definition of a Muslim and is also not what distinguishes a Muslim from a Mumin. Rather, it is one characteristic of a true Muslim. https://sunnah.com/bukhari:11 You are referring to the Abu Hurairah Hadith, "The Muslim is the one from whose tongue and hand the people are safe". So, being a Muslim is more than being a Believer? This is the content of a Hadeeth from the holy Prophet PBUHH, which says: رسولُ اللّه ِ صلى الله عليه و آله: المُسلِمُ مَن سَلِمَ المُسلمونَ مِن لِسانِهِ و يَدِهِ a Muslim is one from whose tongue and hand the Muslims are safe and also from Imam Sadiq PBUH as: الإمامُ الصّادقُ عليه السلام: المُسلمُ مَن سَلِمَ الناسُ مِن يَدِهِ و لِسانِهِ ، و المُؤمِنُ مَنِ ائتَمَنَهُ الناسُ على أموالِهِم و أنفُسِهِم a Muslim is one from whose hand and tongue the people are safe, and a believer is one whom the people trust with their goods and lives and yet again from the holy Prophet PBUHH as: قـالَ النّبِيّ صلي الله عليه و آله: اَلْمُؤْمِنُ مَنْ اَمِنَهُ المُسلِمونَ عَلى اَمْوالِهِمْ وَدِمائِهِمْ وَالْمُسْلِمُ مَنْ سَلِمَ الْمُسْلِمُونَ مِنْ يَدِهِ وَ لِسانِهِ، وَ الْمُهاجِرُ مَنْ هَجَرَ السَّيّئاتِ a believer is one whom Muslims trust with their goods and blood (lives), a Muslim is one from whose hand and tongue Muslims are safe, and an emigrant (Muhajir) is one who forsakes wrongdoings ... and maybe there are more ... Also note that the very wordings might sometimes be somewhat doubtful as to whether they are exactly what the original reciter stated ... By the way, Islam is the religion of being Tasleem (submitting to Allah), so being a Muslim means being aligned in such a system.
A Muslim might be a believer or not, but a believer is indeed a Muslim, as being a Muslim is more general and being a believer is more specific.
I have an application in which I've already implemented Resque for background jobs. Also, I'm using the resque-status gem, and they all work smoothly. My application is a Sinatra app and I've made my Rakefile; here it is:

```ruby
require "resque/tasks"
require "resque_scheduler/tasks"
```

Now, I'm trying to run the application under jruby 1.6.3 (ruby-1.8.7-p330), but unfortunately I got a problem when I tried to run COUNT=2 VERBOSE=true QUEUE=* rake resque:workers. The terminal threw this error (after adding --trace to the rake command):

```
rake aborted!
can't convert Class into String
org/jruby/RubyFile.java:872:in `basename'
org/jruby/RubyFile.java:1069:in `extname'
(eval):3:in `extname_with_potential_path_argument'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:561:in `load_imports'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:502:in `raw_load_rakefile'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:78:in `load_rakefile'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:129:in `standard_exception_handling'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:77:in `load_rakefile'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:61:in `run'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:129:in `standard_exception_handling'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/lib/rake/application.rb:59:in `run'
/Users/amr/.rvm/gems/jruby-1.6.3@global/gems/rake-0.9.2/bin/rake:32:in `(root)'
org/jruby/RubyKernel.java:1063:in `load'
/Users/amr/.rvm/gems/jruby-1.6.3@global/bin/rake:19:in `(root)'
```

I tracked through my project's files and found that when I remove require "resque/job_with_status" from one file, it all works and I get the expected error, i.e. Resque::JobWithStatus couldn't be found. I tried to run jruby --1.9 -S rake ... but that doesn't work either.
So is there any way to get resque-status to work with JRuby? I've already opened an issue on GitHub: https://github.com/quirkey/resque-status/issues/45 Thanks in advance
The flac decoder in 11.1 Amarok: playing my CD rips (flac) in Amarok has issues with something like skips - little short dropouts in the sound. Using Suse 10.3 (still on the box) does not have this issue. System: Suse 11.1, KDE 3.5, recent install and update. 32 bit on 32 bit platform, old Athlon 2500+, Asus MB, 1 GB RAM, two 120 GB internal drives, Audigy SC. I am running Fedora 11 with KDE version 4.4, Amarok 2.2.2 and xine 2.6. I have xine-lib-extras-freeworld installed from RPM Fusion. Amarok will refuse to play mp3 files; however, M4A files work just fine. I initially thought that xine-lib-extras-nonfree was required for mp3 playback, but it seems that my laptop, which is F11 KDE 4.3 and Amarok 2.2.1, does not have this installed and works just fine. When attempting to play an mp3 in gxine it throws this error: "No demuxer found - stream format not recognised." I have a problem with Amarok. I did a yum update today (which completed ok) but now I cannot play music from Amarok. Rather, it appears as if it is playing, but no sound is coming out of the speakers. Other players (xmms, mplayer, xine) are all working fine. A quick note is that on starting up, there is a message that "Phonon: KDE's multimedia library The audio playback device PulseAudio sound server does not work. falling back to." A google search shows that I may need to configure Amarok to output sound using something like ALSA. Which is fine, I have ALSA installed. I'm running Amarok 1.4 on Linux Ubuntu 9.10, and out of the blue it tells me it cannot play .mp3 files. When it asks if I want .mp3 support, I click "Yes", then nothing happens. flac, .m4u, and other files play fine. I have got Fedora 13 x86_64 with KDE. I installed the Fluendo MP3 plug-in so I could play songs in Amarok. But the first ~15 seconds of every song are really lagging; the rest of the song sounds fine. I tried to set it to an external MySQL database, but then all my music disappears.
My notebook is using an Intel Core 2 Duo T6400 CPU and 2 GB RAM. When I am playing MP3s with Amarok and at the same time running Firefox, the computer tends to hang and is slow to respond when opening a new website. Is it due to my hardware? How to get Amarok 1.x installed on 11.2? I updated to 11.2, and as far as I can tell it's only got A2. I am REALLY DISPLEASED with A2 and want to go back to 1.4. I couldn't find 1.4 anywhere for download, as far as repos go. I am not having any luck with the 'out-of-the-box' version of Amarok on 11.2. It is missing a lot of features (syncing with my Fuze most notably). So I figured that I would go to the latest version, but the one-click install from backports (2.2.2-4.1) reported that it could not install amarok-xine and amarok-libvisual. It installed anyway, but was not stable at all. I thought that the install process was supposed to do the updates for me. I went back to the 2.1.1 version, but it still crashes pretty frequently. Trying banshee right now. Anyway, the big question is... do I go up, down, or off for Amarok? I have a Sansa Fuze and would love to sync music with it, but Amarok is not getting the job done right now. I have read of people going back to 1.4, but after my attempt to install 2.2.2, I'm not so sure that I'm doing it right. Should what I did have worked? I'd like to find one that will get album art and transfer it to my Fuze as well (folder.jpg is what seems to make it the happiest). I'm using Amarok 1.4 and when I select playlist => burn to CD, k3b doesn't appear. I get the prompt that asks if I want to make an audio or data disk. Only the Audio CD works; the data disk option doesn't work. I'm running OS 11.2 x86_32, KDE 4.4.3 with the latest updates, and a problem has cropped up with Amarok (version 2.3.0). When starting the app it keeps saying there is no mp3 support and asks if I want to install it. Tell it yes, restart the program, and it says the same thing at each start. What's strange is that mp3s play just fine.
I've run the 10 steps for troubleshooting and all the codecs are fine; this appears to be an Amarok-only problem. SMPlayer works fine, Kaffeine and VLC too. I have a vast amount of MP3s but I can't seem to get Amarok to play them. Amarok opens up and I can see all my music in the music folder with no problem, but when I select an MP3 and select Play I get nothing. I could convert my CD collection to Ogg Vorbis, but I don't want to go through that process all over again. Currently I'm using 11.2 64bit; suddenly I just could not get it to run. The coloured startup came up but nothing else. So I ran it as su from a terminal and this was the result:

<unknown program name>(6552)/: KUniqueApplication: Cannot find the D-Bus session server: "Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken."
<unknown program name>(6551)/: KUniqueApplication: Pipe closed unexpectedly

I'm having trouble playing some mp3s through Amarok; all other media players are fine with the mp3s in question. Tested on VLC and Audacious. Most other mp3s play through Amarok though. Why this problem? I'm using the gstreamer backend. I tried changing it to xine but then nothing would play. I really like Amarok and want to use it as my main player, so this issue is a little irritating. I run Amarok 2.4.3 from KDR 4.7. I noticed that Last.fm is not listed among the services. Nor is the usual "Like" icon displayed in the middle pane. And Amarok doesn't scrobble. I have liblastfm0 installed and I installed liblastfm (no "0") from KDR yesterday, but that didn't help. So these should all be from the same repo (can't check right now as PackageKit is doing updates). Any clue what I have to do to get the service back? I use openSUSE 11.4 and KDE 4.7 from the Release repo.
After upgrading my Lenovo T61 to openSUSE 11.2 (64 bit)/KDE 4.3.1, playing mp3/Internet radio (Last.fm) or DVDs using VLC works fine. When trying to play mp3s/Last.fm with Amarok 2.1.1, or when trying to play DVDs using LinDVD, there is no sound at all, although each player appears to be processing the media correctly. When testing sound output in the Amarok settings (backend xine) I get test sound output with these device types: HDA Intel (AD198x Analog) and PulseAudio. But none with HDA Intel (AD198x Digital) or HDA Intel, AD198x Digital (IEC (S/PDIF) Digital Video Output). Could it be that some other sound server, such as ALSA etc., is blocking/competing with the audio hardware? How do I get Amarok/LinDVD to output sound? Isn't the scrobbler (or whatever it is called) built into the newer Amarok as it used to be? I recently upgraded to 2.3.1 and I cannot find any possibility to connect with Last.fm. Does one have to install it separately instead? I am on OS 11.2 and upgraded through the KDE application update repo. I listen to some voice mail messages over the web. Firefox downloads the message as a ".wav" file, and then invokes Amarok to play it. What I have been noticing is that Amarok seems to cut the message short. As an additional test, today after playing a message in Amarok, I tried playing the same message with Kaffeine. There was significantly more to the message when played in Kaffeine. The particular message ended with the phone number I should call back (if I wanted to). None of that ending sentence showed up in the Amarok playback. Suse 11.3, KDE 4.4, Amarok - the standard version that comes with 11.3. I start Amarok, which still worked fine yesterday and this morning. I get the Amarok splash screen and then it disappears and the Crash Handler with the following message appears. I've had problems with Amarok crashing, so I reinstalled. I think it's the Nvidia driver, or so I read on another thread.
I tried the uninstall-and-init 3 routine and needed gcc and a few more files that were not installed. In the end I had enough and reinstalled. Now Amarok runs. If I install the Nvidia driver, I have an idea that it will crash like Ayrton Senna, or maybe quicker. So is there a simple workaround?

I'm having an issue with playback in Amarok 2.3 on openSUSE 11.3. When using Xine, Amarok plays songs very quickly, probably something like 3x-4x faster than it should, and there is also no audio output when it does this. Using GStreamer it plays back at normal speed, but also without audio output. I've done some searching for this problem, but haven't found anything helpful. Any ideas are greatly appreciated. Below I'm listing all the multimedia/codec packages that I have installed.
import _ from 'lodash';
import { ElementIdentifier, ElementFragment } from '../identifier/interfaces';
import {
  toAbsoluteXPath,
  toUniqueXPath,
  toSiblingsXPath,
  toAncestorXPath,
  greedyXPathFromFragments,
} from '../xpath';

export const getElement = (
  identifier: ElementIdentifier,
  ignoreClassNames: string[] = [],
  document: Document = window.document
): Element | undefined => {
  const last = _.last(identifier.absolute);
  if (!last) {
    return undefined;
  }
  const xpath = toAbsoluteXPath(identifier);
  const result = evaluateXPath(xpath, document, document);
  if (
    result.length === 1 &&
    isMatchedAttributes(result[0], last, ignoreClassNames)
  ) {
    return result[0];
  } else {
    return undefined;
  }
};

export const findElements = (
  identifier: ElementIdentifier,
  ignoreClassNames: string[] = [],
  document: Document = window.document
): Element[] => {
  const last = _.last(identifier.absolute);
  if (!last) {
    return [];
  }
  const strictElement = getElement(identifier, ignoreClassNames, document);
  if (strictElement) {
    return [strictElement];
  }
  const uniqueXPath = toUniqueXPath(identifier);
  const uniqueResult = evaluateXPath(uniqueXPath, document, document);
  const uniqueMatched = uniqueResult.filter((e) =>
    isMatchedAttributes(e, last, ignoreClassNames)
  );
  if (uniqueMatched.length === 1) {
    return uniqueMatched;
  }
  let elements: Element[] = [];
  let fragments: ElementFragment[] = [];
  for (let i = 0; i < identifier.absolute.length; i++) {
    const fragment = identifier.absolute[identifier.absolute.length - 1 - i];
    fragments = [fragment, ...fragments];
    const xpath = greedyXPathFromFragments(fragments);
    const elems = evaluateXPath(xpath, document, document);
    const matched = elems.filter((e) =>
      isMatchedAttributes(e, last, ignoreClassNames)
    );
    if (matched.length === 1) {
      elements = matched;
      break;
    } else if (matched.length > 1) {
      elements = matched;
    }
  }
  return elements;
};

export const findElementsWithPredicate = (
  identifier: ElementIdentifier,
  predicate: (element: Element) => boolean,
  document: Document = window.document
): Element[] => {
  const last = _.last(identifier.absolute);
  if (!last) {
    return [];
  }
  const uniqueXPath = toUniqueXPath(identifier);
  const uniqueResult = evaluateXPath(uniqueXPath, document, document);
  const uniqueMatched = uniqueResult.filter(predicate);
  if (uniqueMatched.length === 1) {
    return uniqueMatched;
  }
  let elements: Element[] = [];
  let fragments: ElementFragment[] = [];
  for (let i = 0; i < identifier.absolute.length; i++) {
    const fragment = identifier.absolute[identifier.absolute.length - 1 - i];
    fragments = [fragment, ...fragments];
    const xpath = greedyXPathFromFragments(fragments);
    const elems = evaluateXPath(xpath, document, document);
    const matched = elems.filter(predicate);
    if (matched.length === 1) {
      elements = matched;
      break;
    } else if (matched.length > 1) {
      elements = matched;
    }
  }
  return elements;
};

export const getSiblingsElements = (
  identifier: ElementIdentifier | ElementIdentifier[],
  ignoreClassNames: string[] = [],
  document: Document = window.document
): Element[] => {
  const identifiers = !Array.isArray(identifier) ? [identifier] : identifier;
  const lasts = identifiers
    .map((i) => _.last(i.absolute))
    .filter((f) => typeof f !== 'undefined') as ElementFragment[];
  if (lasts.length === 0) {
    return [];
  }
  const xpath = toSiblingsXPath(identifier);
  const elements = evaluateXPath(xpath, document, document);
  return elements.filter((e) => {
    return lasts.some((f) => isMatchedAttributes(e, f, ignoreClassNames));
  });
};

export const getMultipleSiblingsElements = (
  identifiers: ElementIdentifier[][],
  ignoreClassNames: string[] = [],
  document: Document = window.document
): (Element | undefined)[][] => {
  if (identifiers.length === 0) {
    return [];
  }
  if (identifiers.length === 1) {
    const result = getSiblingsElements(
      identifiers[0],
      ignoreClassNames,
      document
    );
    return [result];
  }
  const ancestorXPath = toAncestorXPath(identifiers);
  const ancestorElements = evaluateXPath(ancestorXPath, document, document);
  const elementsArray = identifiers.map((identifier) => {
    return getSiblingsElements(identifier, ignoreClassNames, document);
  });
  const maxLength = elementsArray.reduce((acc, current) => {
    return current.length > acc ? current.length : acc;
  }, 0);
  if (ancestorElements.length === maxLength) {
    const grouped = ancestorElements.map((ancestor) => {
      return elementsArray.map((elements) => {
        return elements.find(
          (e) =>
            // tslint:disable-next-line
            ancestor.compareDocumentPosition(e) & 16 // Node.DOCUMENT_POSITION_CONTAINED_BY
        );
      });
    });
    return _.zip(...grouped);
  } else {
    return elementsArray;
  }
};

const isMatchedAttributes = (
  element: Element,
  fragment: ElementFragment,
  ignoreClassNames: string[]
): boolean => {
  return (
    isMatchedId(element, fragment) &&
    isMatchedClassNames(element, fragment, ignoreClassNames) &&
    isMatchedRoles(element, fragment)
  );
};

const isMatchedId = (element: Element, fragment: ElementFragment): boolean => {
  const id = element.id.length > 0 ? element.id : undefined;
  return fragment.id === id;
};

const isMatchedClassNames = (
  element: Element,
  fragment: ElementFragment,
  ignoreClassNames: string[]
): boolean => {
  const classNames = Array.from(element.classList).filter(
    (cn) => !ignoreClassNames.includes(cn)
  );
  if (fragment.classNames.length === 0 && classNames.length === 0) {
    return true;
  }
  const intersected = _.intersection(fragment.classNames, classNames);
  return intersected.length > 0;
};

const isMatchedRoles = (
  element: Element,
  fragment: ElementFragment
): boolean => {
  const roleString = element.getAttribute('role');
  const roles = roleString ? roleString.split(' ') : [];
  if (fragment.roles.length === 0 && roles.length === 0) {
    return true;
  }
  const intersected = _.intersection(fragment.roles, roles);
  return intersected.length > 0;
};

export const evaluateXPath = (
  xpath: string,
  root: Node = window.document,
  document: Document = window.document
): Element[] => {
  const result = document.evaluate(
    root === document && !/^\./.test(xpath) ? xpath : `.${xpath}`,
    root,
    null,
    7, // XPathResult.ORDERED_NODE_SNAPSHOT_TYPE
    null
  );
  const elements: Element[] = [];
  for (let i = 0; i < result.snapshotLength; i++) {
    const node = result.snapshotItem(i);
    if (node && node.nodeType === 1) {
      // Node.ELEMENT_NODE
      elements.push(node as Element);
    }
  }
  return elements;
};
Feb 22, 2007

Koyoto Potato is a way of tracking your idle time and trading it with other people, so that you can do your bit to reduce greenhouse gas emissions (or trade your excess idle time with others). I don't know if I'm explaining it very well, but have a look at the application and give it a go. Every little bit counts.

Feb 12, 2007

The Zelda one is really quite cool, especially when the "actor" does one of Link's famous whirlwind sword moves (and when he's cutting bushes looking for rupees). Check it out at the thinking blog.

What to do? What to do? Well, you could always go doorknocking in nearby streets, hang around at the local EB Games store and ask people to play as they enter or leave the store (though you might get into trouble with the law), or you could put an ad in the local newspaper claiming how lonely you are and that you just want to meet some new Miis, but they're probably not the best options you could choose. Then again, there's always MapWii. MapWii is a pretty cool little mashup that links people with Wiis together. You can register your location (as imprecisely as you like) and you'll turn up on their search page, allowing you to be found by other people. It's pretty cool and quite useful. So, what are you waiting for - get to it!

Feb 9, 2007

On a few occasions in the past I've been asked about some of the problems you can have in implementing Scrum. Something about having already made (and learned from) plenty of mistakes seems to make me qualified to give advice on what not to do, and to offer a few tips on approaches to take. There are plenty of other resources online that can help, by trawling through mailing lists looking for the information you need, but these things take time, most mailing list threads degenerate from their original subject, and newbie questions can sometimes raise the ire of others.
And then of course there's just no substitute for having a one-on-one conversation with someone else about the experiences they've had (i.e. having a mentor).

Paul, one of my former staff, left a while ago to join a large corporate in a team leader role. It's a good opportunity and gives him the chance to stretch himself, which should be what anyone looks for in any job. Unfortunately the organisation is still stuck using the traditional waterfall-style SDLC, and he has a real problem with his team members being unmotivated, uncaring and lethargic. What to do? What to do? Well, it was obvious that the current way of doing things wasn't working, so he needed to shake things up. Having come from a Scrum-based environment where agile was working, and working reasonably well, he decided it might be worth a try. He's only just started the initial introduction of Scrum with his team, and given their reactions I'd say he's got his work cut out for him. Before he got started, though, he decided it would be worth asking a few questions to see how things could be done. Here are some (edited) parts of the conversation that you might find useful...

[Paul]: I am thinking of implementing agile within my team on a small scale; can this work with a development team of 4? Also, what (free) tools could I use to track their progress, such as how many hours are left at a task level, comments, etc.? I also want to show the business, after each project, the differences between what we estimated and the actual development, so we can get an idea of whether we are over-estimating or not.

[Richard]: Agile for a team of 4 will work fine. The best tool for progress tracking is either a whiteboard (seriously!) or Excel - as you can hack a spreadsheet around to suit your needs pretty easily. There are other tools out there, such as ScrumWorks from Danube, but I wouldn't get too excited about that sort of thing just yet, as you need to get the team used to agile first.
I assume your tracking of actuals vs estimates will be based off timesheets or some other time tracking system. If so, try to make sure that the time tracking has a reference back to the sprint item you're working on. If not, it can be a real pain to try and marry them back up.

[Paul]: Thanks for the info about the tools. Regarding timesheets vs estimates: we only track against project codes, not tasks, which makes this harder, so I will need another approach for tracking it.

[Richard]: You might want to try a simple alternative then. Maybe just another Excel sheet that mimics the sprint backlog, where the team can stick in the actual time spent. Don't put it on the same sheet as the backlog though, as the backlog is for burndown and estimated time remaining, while the other is actual time spent. Two very different things.

[Paul]: Though my company cannot truly support Scrum in the sense of requirements, priority etc. (requirements are done independently of us), I should be able to implement the same principles within my team. We currently work in a waterfall-style approach (in general), and each team (mine or another) is given requirements as a whole. So what I want to do is have many small sprints (let's say 2 weeks), so at the start of each sprint each developer (like we did) chooses which requirement they want to do, and they go away and do it. At the end of the sprint, we do a demo of what they did to show the business their progress. Any feedback at that stage would go into the next sprint.

[Richard]: Take little steps. One suggestion though - don't give the team the choice of what requirement to do. Tell them the order of the requirements (you set the priorities) and get them to tell you how long it will take. You can obviously do this sprint by sprint, but you might also want to consider getting a high-level "run over the target" first.
That is, when the requirements first land on your desk, get the whole team together and do a quick ballpark estimation of each item - it will give you a feel for whether the requirements are even possible in the timeframe the PM has given you. If they are - great. If not - you'll have some negotiation to do. What you should also do is track the velocity at which you are completing those initial estimates (don't change them) so that as sprints progress you have a more accurate sense of when you'll be done. The demo is very important! Make sure the PM, BAs and the project owner (if possible) are all available to see the demo and have a hands-on session. The open communication will get you loved by all (even when sprints fail).

[Paul]: I was also thinking of having my team get together for 15 minutes once a week (for now), on a Monday morning, to discuss what we did last week and any issues they may have.

[Richard]: I'd strongly recommend daily. Daily will help encourage teamwork - once a week will seem more like just another meeting and won't do anything to build communication between people.

[Paul]: Once my team have got used to it, hopefully they can see how well we work together.

[Richard]: Hopefully :-) If you remember what we did, we actually did a mini-sprint first up: a 3-day experiment to prove we could do it, and to prove we could have something tested and demoable in that timeframe. It might be worth trying a 1-week or 2-day sprint first up. Also, you should think about spending time explaining the whys and wherefores of it all first. If you want presentations or overviews you can borrow mine or use the resources from Mountain Goat Software (they're quite good). Be prepared for the first few sprints to be a bit awkward and to fail. It takes time to get into a rhythm with this stuff. You might also want to ask around and see if anyone has worked in agile teams before.
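Richard's advice to track velocity against the unchanged initial estimates can be illustrated with a toy calculation. The item names and numbers below are made up for the example and are not from Paul's actual project:

```python
# Toy velocity calculation: initial estimates are left unchanged, and the
# points completed in each finished sprint give an average velocity, which
# in turn projects roughly how many sprints remain.
import math

initial_estimates = {"login": 8, "reports": 13, "export": 5, "search": 8}
completed_per_sprint = [6, 9, 7]  # points actually finished in sprints 1-3

total = sum(initial_estimates.values())      # total points planned
done = sum(completed_per_sprint)             # points finished so far
velocity = done / len(completed_per_sprint)  # average points per sprint
remaining = total - done
sprints_left = math.ceil(remaining / velocity)

print("velocity: %.1f points/sprint" % velocity)
print("remaining: %d points, about %d sprint(s) left" % (remaining, sprints_left))
```

The point of the exercise is exactly what Richard describes: the estimates never change, only the observed velocity does, so the projection gets more trustworthy with every completed sprint.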
[Paul]: All our projects are released on a three-month cycle; for example, all development must be done by X date, which is when we hand over to the test team etc. My first sprint is going to be next week (Monday) for 2 days, then another 2-day sprint, then 5 days, then 14 onwards (depending on the project). At the end of each sprint (after the dummy run) I will get my team and the main BA in one room to perform a demonstration and get feedback.

[Richard]: That's a good plan. As long as the main BA fills the "product owner" role you should be fine, but you may need to talk it over with them first.

Paul has now commenced the initial sprints with his team, and I'm curious to see what lessons he and his team will learn as they take this journey. If you have questions you'd like to ask about Scrum and how to implement it, the best way is to ask people you know who've done it, ask in the newsgroups, or feel free to ask me. I'm always open to talking to other people about this sort of stuff and I'd love to help you.

Feb 8, 2007

Culturally, these days if you don't "slip, slop, slap" then most people think you're a bit thin on grey cells, but when I was a kid there was almost no skin cancer awareness and I spent lots of time in the sun and regularly got badly sunburned. Well, today I've paid a little more of the price of getting burnt in those early years and had another skin cancer removed (and another $280 removed from my wallet). I went to the specialist to check on a thing on my leg which looked suspicious, only to be told it was a wart - bizarre! The doc froze it off anyway, which is nice, 'cause I was overly conscious of it, but he then checked the rest of me out and found one in my hair (what little remains) and one on my back. The one on my back got "scraped off" under a local anaesthetic, and for the one in my hair I need to use a cream. It's the third time I've had skin cancers removed and I'm only 35.
I guess I'll have a few more removed by the time I'm done; I'll just need to keep a close eye on things so that I don't end up a statistic or needing horrible surgery. Searching on Flickr for "Skin Cancer" produces some rather horrific images. Needless to say, Anne and I make sure our girls get covered up when they go in the sun, and these days I do my best to avoid it altogether (I like to preserve my office tan!).

Feb 5, 2007

There's an awesome WPF/E Vista mockup on the web, complete with Office 2007 applications and other cool stuff. Check out the screenshot below. I've tried it in both IE 7 on the PC and Firefox on the Mac, and both work really well. I'm very impressed!
Sharing to Reminders uses a weird title sometimes

- Hit Share from an article
- Choose Reminders.app
- Hit the Add button
- Later, look in Reminders

The title of the reminder is "See first unread article", the Notes field has the actual title of the article you shared, and the URL is correct. I can't make this happen on demand from my iPad as I type this, but it seems to happen much (maybe most?) of the time on my iPhone when saving articles to read later.

I'm not sure I understand the problem. Could you attach some screenshots that show the behavior you see as incorrect?

Turns out I can’t make it happen from iPhone, either. But it comes up every few days, so I’ll include an image next time I see this. One thing that would be interesting is whether the string “See first unread article“ appears anywhere in the code. It seems like it ought to (vs. that coming from Reminders), and that may provide a hint about how to trigger this.

Yep, here it is: https://github.com/Ranchero-Software/NetNewsWire/blob/e8045b0e8bfeef69f76301086d70e466d4e810c6/Shared/Activity/ActivityManager.swift#L72

I’ll read through some of this code this evening to see if I can figure out what to do to make this happen reliably. Are you using Siri Shortcuts for anything?

Yes I do, but not with this app.

This is a pretty interesting bug. It implies that there is some interaction happening between the share sheet dialog and donated user activities. I wonder if we are triggering an issue with how Handoff uses the clipboard. This is most likely an Apple bug, but we still try to work around those as much as we can. If you can figure out how to consistently trigger this, we might be able to do something about it.

Thanks, I’ll let you know!

It happened just now! I have an image, but the GitHub iOS client doesn’t seem to let me paste it. I’ll add it shortly. I was paying attention this time and can tell that the problem is present as soon as the share sheet comes up.
Furthermore, if I cancel and share again - either within the share sheet or from the app itself - the same problem is present. I don’t know if it’s relevant, but this is the article it happened for: https://daringfireball.net/linked/2020/03/30/rene-solo. Switching to another article shares fine, and switching back to the above one remains broken. Perhaps it’s something to do with the way Gruber links out his titles?

Here is the image I refer to in my comment above:

In case it matters, I do all my article reading from the “All Unread“ view, pressing the “next article“ button in the middle of the bottom toolbar.

It doesn't look like you used the share button in the toolbar. Just to be sure, you long-pressed the YouTube link and shared that using the context menu to Reminders, right?

While viewing the article, I am hitting the “box with an up arrow“ button at the right side of the bottom toolbar with five items. The problem is no longer reproducing for me with that article when I went back to the “Today“ section and tried again just now.

I'm able to replicate the bug now. You have to have previously hit the next-unread button for it to happen. This one is so weird. The last donated NSUserActivity always overrode whatever was suggested for the title of the shared item. I removed the NSUserActivity donation for next unread. It would be nice for Siri to suggest this to the user, but not at the price of the share sheet being broken. I'm not sure we won't run into this problem in other places where we are donating NSUserActivities and the share sheet is available. Anyway, I've cleared this one up and the fix will be in the next iOS release (5.0.1).

Great, thank you for sticking with me!
//
// dummy publisher
//

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <time.h>

#include "mb.h" // generated code and headers

#define MY_TOPIC "Voltage" // DDS topic name

static volatile int do_loop = 1; // loop flag, cleared by the signal handler

static void sigint_handler (int sig) // signal handler
{
  (void) sig;
  do_loop = 0;
}

int main (int argc, char *argv[])
{
  (void) argc;
  (void) argv;

  // Change signal disposition
  struct sigaction sat;
  sat.sa_handler = sigint_handler;
  sigemptyset (&sat.sa_mask);
  sat.sa_flags = 0;
  sigaction (SIGINT, &sat, NULL);

  // Declare DDS entities ------------------------
  int status;
  dds_qos_t *qos = NULL;
  dds_entity_t domain_participant = NULL;
  dds_entity_t voltage_topic = NULL;
  dds_entity_t voltage_publisher = NULL;
  dds_entity_t voltage_writer = NULL;

  // Initialize DDS ------------------------
  status = dds_init (0, NULL);
  DDS_ERR_CHECK (status, DDS_CHECK_REPORT | DDS_CHECK_EXIT);

  // Create participant
  status = dds_participant_create ( // factory method to create domain participant
    &domain_participant, // pointer to created domain participant entity
    DDS_DOMAIN_DEFAULT,  // domain id (DDS_DOMAIN_DEFAULT = -1)
    qos,                 // QoS on created domain participant (can be NULL)
    NULL                 // listener on created domain participant (can be NULL)
  );
  DDS_ERR_CHECK (status, DDS_CHECK_REPORT | DDS_CHECK_EXIT);

  // Create a publisher
  status = dds_publisher_create ( // factory method to create publisher
    domain_participant, // domain participant entity
    &voltage_publisher, // pointer to created publisher entity
    qos,                // QoS on created publisher (can be NULL)
    NULL                // listener on created publisher (can be NULL)
  );
  DDS_ERR_CHECK (status, DDS_CHECK_REPORT | DDS_CHECK_EXIT);

  // Create topic for writer
  status = dds_topic_create ( // factory method to create topic
    domain_participant,   // domain participant entity
    &voltage_topic,       // pointer to created topic entity
    &Modbus_voltage_desc, // pointer to IDL-generated topic descriptor
    MY_TOPIC,             // name of created topic
    NULL,                 // QoS on created topic (can be NULL)
    NULL                  // listener on created topic (can be NULL)
  );
  DDS_ERR_CHECK (status, DDS_CHECK_REPORT | DDS_CHECK_EXIT);

  // Create writer without QoS
  status = dds_writer_create ( // factory method to create typed writer
    domain_participant, // domain participant entity or publisher entity
    &voltage_writer,    // pointer to created writer entity
    voltage_topic,      // topic entity
    NULL,               // QoS on created writer (can be NULL)
    NULL                // listener on created writer (can be NULL)
  );
  DDS_ERR_CHECK (status, DDS_CHECK_REPORT | DDS_CHECK_EXIT);

  // Prepare samples ------------------------
  Modbus_voltage writer_msg;
  writer_msg.id = 3;
  srand ((unsigned int) time (NULL));
  float a = 5.0f;

  while (do_loop) {
    // dds write
    writer_msg.val = ((float) rand () / (float) RAND_MAX) * a;
    status = dds_write (
      voltage_writer, // writer entity
      &writer_msg     // pointer to topic structure
    );
    DDS_ERR_CHECK (status, DDS_CHECK_REPORT | DDS_CHECK_EXIT);
    printf ("write: %f\n", writer_msg.val);
    dds_sleepfor (DDS_MSECS (200));
  }

  // Release resources
  printf ("Sanitize\n");
  dds_entity_delete (domain_participant);
  dds_fini ();
  exit (0);
}
Date: Sun, 10 Dec 2017 07:45:10 -0500
From: Jeffrey Walton <noloader@...il.com>
To: oss-security@...ts.openwall.com
Subject: Re: Re: Recommendations GnuPG-2 replacement

Hi Marcus,

Sorry to go off-list. Regarding:

> These should all be blog entries. In fact, I commented on CMake here:
> https://neopg.io/blog/why-cmake/ The short version is: cmake has much
> less boilerplate, more stable interfaces, and it is snappier to use
> during development. It is also well supported by all five major
> platforms (Windows, MacOS, Linux, Android and iOS).

We had so many problems with CMake we had to drop it. It accounted for nearly 20% of our bugs. We could not even set a "C++ project" (i.e., 'project(cryptopp, CXX)') without breaking CMake. Also see https://www.cryptopp.com/wiki/CMake#CMake_Removal.

Regarding:

> I am not per se opposed to a multi-process design, but I'd rather have
> short-lived processes that are started for a single task (like
> decrypting a single message) than long-running daemons...

I think the library made a good design decision by moving secret key operations out of process and then interfacing through a message-passing interface (i.e., Libassuan). In theory a compromise of the web server should not yield secret keys, because the keys are in another process.

Good luck with the replacement. I really like Jack Lloyd's Botan. It's a very nice library. Related, here are some of the upcoming engineering goals for Botan: https://lists.randombit.net/pipermail/botan-devel/2017-November/002242.html

Jeff

On Fri, Dec 8, 2017 at 7:47 AM, Marcus Brinkmann <marcus.brinkmann@...r-uni-bochum.de> wrote:
> On 12/08/2017 12:01 PM, Ludovic Courtès wrote:
>> Hi Marcus,
>>
>> Marcus Brinkmann <marcus.brinkmann@...r-uni-bochum.de> skribis:
>>
>>> I started neopg.io two months ago to provide a modern replacement for
>>> GnuPG.
>>> It will go back to a single-binary architecture like gpg1 was,
>>> but move forward on just about every other issue:
>>>
>>> * Written in C++
>>> * based on the Botan crypto library instead of libgcrypt
>>> * typical library + CLI (with subcommands) architecture
>>> * better testing (CI, static analysis)
>>
>> Given that you worked on GnuPG, can you give some background? It isn’t
>> clear to me why using C++/Botan/CMake to give a “modern” feel (what does
>> it mean?) will lead to “better” software (under which criteria?).
>
> These should all be blog entries. In fact, I commented on CMake here:
> https://neopg.io/blog/why-cmake/ The short version is: cmake has much
> less boilerplate, more stable interfaces, and it is snappier to use
> during development. It is also well supported by all five major
> platforms (Windows, MacOS, Linux, Android and iOS).
>
> Efficiency is the major theme here. I am a good programmer, I can solve
> all the problems that C++, Botan and CMake solve for me. But it doesn't
> make sense, because then I would be bogged down in tangent issues that
> don't help the users.
>
> For C++: if you look at GnuPG source code, a huge part of it is about
> memory management. For example, there are several implementations of a
> dynamically growing string buffer (membuf_t, es_fopenmem, several ad-hoc
> implementations based on realloc). The iobuf_t filter/pipe mechanism is
> object-oriented. The libgcrypt API is object oriented. In theory, you
> can write nice code in any language. In practice, C++ has solved all
> these problems years ago, and the language is evolving to include new
> features (C++11, C++14, C++17), while C has stalled. With C++ STL and
> boost, you can kick out most platform dependent code. std::mutex and
> std::thread are now the same on Windows and Unix. Boost::locale
> replaces iconv and gettext. It is much more efficient to program in C++
> than in C. (BTW, C++ is a compromise.
> I have a love-hate relationship
> with the language, but I am picking languages for the job at hand, and
> for a fork of GnuPG, it is the obvious choice to me).
>
> For Botan: libgcrypt is a major maintenance burden on the GnuPG project.
> There have also been several embarrassing CVEs this year, and crypto
> researchers have commented negatively on Twitter. To justify that,
> you'd expect the library to be used by many. Unfortunately, libgcrypt
> has never seen much use outside the GnuPG project. The only other major
> user I am aware of was gnutls, which switched to libnettle in 2011
> (http://lists.gnu.org/archive/html/gnutls-devel/2011-02/msg00079.html).
> I also don't like how libgcrypt handles entropy. It makes a difference
> between "weak" random and "strong" random, and it will block if it can't
> get enough "entropy" from the system. It is a very conservative
> approach, and leads to bad user experience.
>
> Botan on the other hand is actively developed, and provides several very
> useful interfaces that not only replace libgcrypt for me, but also the
> iobuf (pipe/filter) interface in GnuPG, libksba (GnuPG's ASN.1 parser
> and X.509 support library), and several parts in dirmngr (certificate
> cache). Oh, and it has TLS support, while the GnuPG project is
> currently working on its own TLS library ntbtls (which is based on an
> old fork of PolarSSL). The maintainer is friendly and the project is
> very active. It has also been audited (and continues to be audited) by
> a local IT security company in a contract with the BSI (German Federal
> Office for Information Security). They chose Botan after evaluating
> many candidates. I hope that the documentation for the project will
> eventually be released to the public, so we can all learn their reasons
> and have better documentation of Botan internals.
>
>> The multiple-process design in GnuPG had clear justifications
>> AFAIK—e.g., having ‘dirmngr’ and ‘gnupg-agent’ in separate address
>> spaces makes sense from a security standpoint. Do you think these
>> justifications no longer hold, or that the decisions were misguided?
>
> I am not per se opposed to a multi-process design, but I'd rather have
> short-lived processes that are started for a single task (like
> decrypting a single message) than long-running daemons. And I'd
> actually use operating system features to actively isolate these
> processes. This is a complicated discussion, but note that gnupg's
> implementation does not protect you from attackers who gain remote code
> access to any process running under your uid, so the only protection
> here is against accidental memory disclosure akin to heartbleed. And
> yes, heartbleed happened, so there is obviously some value to it, but so
> far it is a single incident. When it comes to prioritizing concerns,
> process isolation comes somewhere below memory safety, code efficiency,
> refactorisation, readability, etc. So I'd argue that the "clear
> justification" is not as clear as you make it sound. The GnuPG project
> is bouncing between "defense in depth" and "it's game over if your uid
> is compromised" without a clear threat model from which to derive a
> priority of concerns.
>
> https://dev.gnupg.org/T1211
>
>> I’m also skeptical about the “better testing” bit: GnuPG and libgcrypt are
>> among the first pieces of software that crypto and security researchers
>> look at, and they’re also the first ones to get fixes when new attack
>> scenarios are devised.
>
> I agree, and that would be a good reason for GnuPG to use openssl!
> However, those researchers focus on the MPI multiplication in RSA, and
> not on the porcelain around it.
>
> From a software engineering point of view: Does the current master
> version pass the test suite? What is the code coverage of GnuPG's test
> suite?
> Which compilers and platforms are tested? How often is the code
> base fuzzed? Is there any static code analysis done regularly?
>
>> I’m sure you have a clear view on this but neopg.io doesn’t reflect
>> that.
>
> Yes, I am lagging behind in documentation. I plan to write all this
> down, and much more.
>
> Thank you for your interest,
> Marcus
Add script-based slave VM management to Jenkins to use virtual machines as slaves

This plugin adds to the Jenkins CI a way to control virtual machines through scripts. It takes two scripts: one for starting a VM and one for stopping it.

NOTE: A great deal of thanks to the authors of the vSphere cloud plugin, whose code was heavily copied to make this plugin.

The first step is to add a new "Cloud" in the Jenkins "Configure System" menu based on "scripted Cloud". Enter its description, start and stop scripts.

Start script: This is called while launching a slave of this cloud. It is expected that this script ensures the VM is up and running. Configure the slave on the VM such that it connects to the master automatically on start. For this, set up the slave either with Java Web Start or with a startup script on the slave that connects to the master.

Stop script: This is called after the job has finished, after all post-build steps.

You can pass parameters to these scripts in the text box, e.g. my_startup_script param1 param2 ... Various environment variables are passed to these scripts, as described in the next section.

Select "Slave virtual computer running under scripted Cloud" while creating a node. The following screenshot shows the node configuration. Here is a description of the important inputs. The text enclosed in brackets after an input name is the environment variable exposed to the selected scripted-cloud start/stop scripts.

- scripted Cloud Instance: This is the name of the scripted Cloud that you want to use.
- Virtual Machine Name (SCVM_NAME): The name of the virtual machine as it appears in the scripted cloud.
- Virtual Machine Platform (SCVM_PLATFORM): The platform of the virtual machine as it appears in the scripted cloud.
- Virtual Machine Group (SCVM_GROUP): The group of the virtual machine as it appears in the scripted cloud.
- Snapshot Name (SCVM_SNAPNAME): The name of the snapshot to use. This is optional.
- Force VM Launch (SCVM_FORCESTART): Launches the virtual machine when necessary.
- Extra Params (SCVM_EXTRAPARAMS): Extra inputs to the start/stop scripts. E.g.
in the above screenshot I have passed "add_to_etc_host=sample-scripted-cloud-node". I use this in my start script to add the IP of the newly started VM to /etc/hosts on the master, so that it can be referred to by name in other steps.
- Disconnect after Limited Builds: Forces the slave agent to disconnect after the specified number of builds have completed, triggering the disconnect action.
- What to do when the slave is disconnected: Action to perform (Shutdown, Revert, Reset, Nothing) when the slave is disconnected, manually or via Jenkins.

The following environment variables are set in addition to the above inputs:
- SCVM_ACTION: "start" while calling the start script and "stop" while calling the stop script.
- SCVM_STOPACTION: This corresponds to the value of "Availability". Values are: "shutdown", "restart", "reset".

You can check which environment variables are available to the start/stop scripts by adding a "set" or "env" command to them. Environment variables set by this plugin start with "SCVM". The scripted cloud scripts can use these variables in any way; the plugin just assumes that the VM slave is ready and connected when the start script ends.

I have tested the following basic scenarios successfully:
- choosing the ssh mechanism as the slave launch method and shutdown as the action at the end
- choosing Java Web Start as the slave launch method and shutdown as the action at the end

Planning to remove the following configuration:
- "Disconnect after Limited Builds": This does not matter much to users, so I will remove it to avoid confusion.
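As a sketch of what such a start/stop script might look like, the following Python script dispatches on SCVM_ACTION and honours the other SCVM_* variables. The `vmctl` hypervisor CLI here is a hypothetical placeholder (the plugin does not ship one); substitute whatever command actually controls your VMs:

```python
import os


def vm_command(env):
    """Build the hypervisor CLI invocation for one scripted-cloud event.

    `env` is a mapping like os.environ containing the SCVM_* variables
    that the plugin exports. `vmctl` is a made-up command name.
    """
    name = env["SCVM_NAME"]
    action = env["SCVM_ACTION"]          # "start" or "stop"
    if action == "start":
        cmd = ["vmctl", "start", name]
        snap = env.get("SCVM_SNAPNAME")  # optional snapshot to boot from
        if snap:
            cmd += ["--snapshot", snap]
    elif action == "stop":
        # honour the configured post-disconnect action (shutdown/restart/reset)
        cmd = ["vmctl", env.get("SCVM_STOPACTION", "shutdown"), name]
    else:
        raise ValueError("unexpected SCVM_ACTION: %r" % action)
    return cmd


if __name__ == "__main__":
    import subprocess
    subprocess.check_call(vm_command(os.environ))
```

A real start script would also have to wait until the slave agent has connected, since the plugin assumes the slave is ready when the script exits.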
Start-up project developing and introducing a new application for the government contracts industry. Straightforward, simple pages. High number of elements and data sets. Similar to an ERP system. Seeking a SW programmer to code the application across multiple data tables. User-friendly GUI, SQL data, Web-capable. Assist in development of a wireframe simulation of the application.

*NOTE* Partnership agreement in lieu of payment. I'm looking for a partner during development, release, and business growth. Because of the industry, only US Citizens please. Proof of Concept complete, positive feedback. Must sign Non-Disclosure.

26 freelancers bid an average of $15,215 for this job.
C++ MongoClient index optimization for mass data bulk inserts

I am developing an application which inserts data into MongoDB at high frequency (thousands of documents sub-second). As such, index and storage space optimization is key. So, before inserting the first record (collection names are dynamic), I would like to do the following with the C++ driver:

- switch off the autoindex on _id (I have a subdoc as the _id field); no clue how to do it with the C++ driver
- ensure one special index; this works with conn.ensureIndex(coll, mongo::fromjson("{'_id.o':1}"));
- set the index to background (no clue how to do it with the C++ driver)
- set padding to zero (documents are never updated again); no clue how to do it with the C++ driver

My insert command is then conn.insert(coll, vec); which obviously works fine for any number of vector elements. Help is greatly appreciated!

1 of 3 solved: conn.ensureIndex(coll, mongo::fromjson("{'_id.o':1}"), false, "", true, true) does the background index.

1.5 of 3 solved? With padding it seems I have to run another thread over completed collections and compact them... Still the most important question is open: how to get rid of the autoindex on _id?

I'm not sure I understand why you're removing the _id index and replacing it with another index, yet still setting the _id field. Apparently, you can disable the _id index for a collection if needed, by extending the method createCollection from the DbClientWithCommands (documentation) class. Of course, you also need to make sure that an _id is not automatically inserted by the driver (many drivers do this, so for some people, this will still be an issue). The current driver method ensureIndex has a background parameter you can provide (documentation). I'm not aware of any way to programmatically control the padding. It's automatically determined by MongoDB over time for a collection. If you aren't modifying documents, I'd expect it to be near 1 (meaning there's no padding).
Check the stats to be sure.

For creating a collection without an _id index using autoIndexId, you'd need to create a new function, since the built-in one doesn't currently expose this option; copy the code as mentioned above and do this:

bool MyClass::createCollection(const string &ns, long long size, bool capped, int max, bool disableAutoIndexId, BSONObj *info) {
    verify(!capped || size);
    BSONObj o;
    if (info == 0)
        info = &o;
    BSONObjBuilder b;
    string db = nsToDatabase(ns);
    b.append("create", ns.c_str() + db.length() + 1);
    if (size)
        b.append("size", size);
    if (capped)
        b.append("capped", true);
    if (max)
        b.append("max", max);
    if (disableAutoIndexId)
        b.append("autoIndexId", false);
    return runCommand(db.c_str(), b.done(), *info);
}

Thanks for the answer. On 1., you may correct your knowledge ;), it is possible for any collection if you specify {autoIndexId:false} in the createCollection call; the question is only how this works with the C++ createCollection() flavor... With 2. you are right, I found the parameter after looking into the parameter list :) 3. is still very important. Our tests have shown that MongoDB assigns between 1 and 3(!) times the storage size for collections before compacting them.

No, #1 still only applies to capped collections (http://docs.mongodb.org/manual/reference/method/db.createCollection/). While #3 might be important, I don't believe there's a way to specify it. MongoDB determines it statistically at runtime. Why do you think it's critical? Over time, it should drop to 1.

The correct answer is: According to the currently available documentation of MongoDB, it might be read as if this were only possible for capped collections. In reality, it IS possible for regular collections, and I just found the way with the C++ driver:

BSONBUILDER cmd;
BSONOBJ retobj;
cmd.append("create", collname);
cmd.append("autoIndexId", false);
BOOL res = conn.runCommand(LIT_DB_NAME, cmd.obj(), retobj);

My defines above.
"BSONBUILDER" is equivalent to mongo::BSONObjBuilder and BSONOBJ equals mongo::BSONObj.

My colleague has just found that there are two contradictory pieces of documentation available. One says it works only for capped collections; http://docs.mongodb.org/manual/reference/command/create/ says it is not limited to them.

And on the #3 topic: I create collections with 50,000+ docs every 1, 5, 10 minutes etc. If some of them just eat up two or three times the storage they really require, that's not a good thing if you end up with thousands of collections, and they are not supposed to be updated or extended in any fashion. I think background-compacting is an approach.

Edited my answer to reflect the new knowledge you pointed to. I had tried the autoIndexId parameter and was fooled by the way the console worked, as it still insisted on inserting an _id by default. Also, maybe you shouldn't have thousands of collections? The extra disk I/O for background compacting may have been avoided by simply adding an indexed field to a larger collection and allowing the padding to naturally approach 1.

Just to finally clarify (all works fine now): WiredPrairie (thank you for supplying proper source code for the createCollection() replacement!) asked why I am removing the autoindex on _id. It is for memory consumption reasons. No index, no btree. I simply don't need this index, while my compound _id field still delivers uniqueness (it's an ObjectID from another collection plus the timestamp). Queries are never executed against this combination, but on the ObjectID only. Therefore I have an index solely on my _id.o field.

Great! I wouldn't mind if you voted up/accepted the answer. :)
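For readers who just want the shape of the command document that the patched createCollection() sends, here is a Python transcription of the same logic (a sketch for illustration only; note that later MongoDB server versions deprecated and eventually removed the autoIndexId option, so this applies to the server era of this discussion):

```python
def build_create_command(ns, size=0, capped=False, max_docs=0,
                         disable_auto_index_id=False):
    """Build the 'create' command document, mirroring the C++
    createCollection() replacement above. `ns` is 'db.collection'."""
    assert not capped or size, "a capped collection requires a size"
    # strip the database prefix, as the C++ code does via nsToDatabase()
    cmd = {"create": ns.split(".", 1)[1] if "." in ns else ns}
    if size:
        cmd["size"] = size
    if capped:
        cmd["capped"] = True
    if max_docs:
        cmd["max"] = max_docs
    if disable_auto_index_id:
        cmd["autoIndexId"] = False
    return cmd
```

The resulting document is what conn.runCommand() receives on the C++ side, e.g. {"create": "mycoll", "autoIndexId": False}.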
How to identify make/model of a wardrobe?

We moved into a new flat which was sold to us with some wardrobes. We thought: "Cool! Less furniture to buy." After some time, we realised we wanted an extra shelf here, an extra hanger there, etc. Sounds simple: just go to the local furniture shop (e.g., IKEA, Jysk, Mio, whatever) and buy the furniture component you need. Soon we realised that furniture is like phone charger plugs: even for a simple item like a shelf, there are endless variations of width, depth, and side hole placement, which makes it impossible to mix furniture components, even from the same maker, unless they are part of the exact same system. We could just dump all the wardrobes and start anew, but that would be super-wasteful. Is there a way to identify what make/model a wardrobe is?

Ikea usually has a very distinctive structure, and usually "Ikea" stamped somewhere. I don't have much experience with Jysk or Mio, but as far as I remember their furniture is more of the generic type. BUT even for Ikea this is not that simple: they don't sell single shelves and their systems tend to change slightly over time. Hangers are simple enough; you could probably find something generic in Krauta or Bauhaus. Shelves are a little more challenging, but you can always use generic shelf brackets and cut a piece of wood with the same color and thickness.

I'm not aware of a way, other than if it has a label, or if the assembly instructions were with it... there's a reason the instructions say to keep them in a safe place. But as for shelves, as far as I am aware, shelf pins come in 2 or 3 sizes. Take one of your shelves out now, pull the holding pin out, go to your store of choice and match the pin you have to what's available. For the shelf itself, generally you buy the depth you want, in a length close to what you need, and trim the end to fit. I assume that even Ikea would use common-sized shelf pins, but my closest Ikea is a 6 hr drive round trip.
But I know that Ace Hardware, BigLots, Wal-Mart, Home Depot, Lowes, K-Mart, etc, all have these self-assemble cabinets, cubes, closets, wardrobes, etc. All are made in China, or use components from China, but use standard shelf pins. Adding a shelf to one that didn't have enough was a matter of buying the extra shelf material and cutting it to size. Now for the shelf, you could have variations in the thickness of the shelf. Generally, the replacement components are thicker than what ships with the original unit. Not usually a big deal unless you want uniformity, in which case you could spread the shelves so they don't look to be different thicknesses.
XML node reading

I have an xml file as seen below:

<ZPPORDER01>
  <IDOC BEGIN="1">
    <EDI_DC40 SEGMENT="1">
      <TABNAM>EDI_DC40</TABNAM>
      <MANDT>100</MANDT>
      <DOCNUM>0000000000000001</DOCNUM>
    </EDI_DC40>
    <Z1PPORDITEM SEGMENT="1">
      <AUFNR>000000000123</AUFNR>
      <POSNR>0001</POSNR>
      <Z1PPORDOPER SEGMENT="1">
        <VORNR>0010</VORNR>
        <ARBPL>PIGME</ARBPL>
        <Z1PPORDCOMP SEGMENT="1">
          <POSNR>0100</POSNR>
          <CPARAM>RV ;</CPARAM>
        </Z1PPORDCOMP>
        <Z1PPORDCOMP SEGMENT="1">
          <POSNR>0200</POSNR>
          <CPARAM>PLT;</CPARAM>
        </Z1PPORDCOMP>
      </Z1PPORDOPER>
    </Z1PPORDITEM>
  </IDOC>
</ZPPORDER01>

I would like to read POSNR and CPARAM for each <Z1PPORDCOMP SEGMENT="1"> node. As seen, there are multiple nodes with the same node name (Z1PPORDCOMP SEGMENT="1") and the same child names (POSNR, CPARAM). I would like to read all the children's inner text and assign it to 4 different strings in one scan. I wrote a script... I have defined 4 strings to read the data for each child. I can read the value from the first two, but I don't know how to get the data from the next child values. I have searched and found that Xml.XPath is used, but I couldn't figure out how to use it in this case.

dim doc as System.Xml.XmlDocument;
dim node as System.Xml.XmlNode;
doc = new System.Xml.XmlDocument;
doc.Load("\\mypc\ShareOn\INPUT\Test.xml");
dim PosNR0100 as string;
dim PosNR0100_CPARAM as string;
dim PosNR0200 as string;
dim PosNR0200_CPARAM as string;
PosNR0100 = doc.SelectSingleNode("/ZPPORDER01/IDOC [@BEGIN='1']/Z1PPORDITEM [@SEGMENT='1']/Z1PPORDOPER [@SEGMENT='1']/Z1PPORDCOMP [@SEGMENT='1']/POSNR").InnerText;
PosNR0100_CPARAM = doc.SelectSingleNode("/ZPPORDER01/IDOC [@BEGIN='1']/Z1PPORDITEM [@SEGMENT='1']/Z1PPORDOPER [@SEGMENT='1']/Z1PPORDCOMP [@SEGMENT='1']/CPARAM").InnerText;
PosNR0200 = ?
PosNR0200_CPARAM = ?

Have you considered using an XML library?
You can use XPath queries like this:

Imports System.Xml

Module Module1
    Sub Main()
        Dim src = "C:\temp\ZPPORDER.xml"
        Dim doc As New XmlDocument
        doc.Load(src)
        Dim posnrs As New List(Of String)
        Dim cparams As New List(Of String)
        Dim z1ppordcomps = doc.SelectNodes("//Z1PPORDCOMP[@SEGMENT='1']")
        For Each n As XmlNode In z1ppordcomps
            Dim posnr = n.SelectSingleNode("POSNR")
            Dim cparam = n.SelectSingleNode("CPARAM")
            If posnr IsNot Nothing AndAlso cparam IsNot Nothing Then
                posnrs.Add(posnr.InnerText)
                cparams.Add(cparam.InnerText)
            End If
        Next
        Console.WriteLine("POSNR: " & String.Join(",", posnrs))
        Console.WriteLine("CPARAM: " & String.Join(", ", cparams))
        Console.ReadLine()
    End Sub
End Module

Outputs:

POSNR: 0100,0200
CPARAM: RV ;, PLT;

You could access them individually with posnrs(0), posnrs(1), cparams(0), and cparams(1).

Use XML Serialization. Create classes to represent your data:

Imports System.Xml.Serialization
Imports System.IO

Public Class ZPPORDER01
    Public Property IDOC As IDOC
End Class

Public Class IDOC
    Public Property EDI_DC40 As EDI_DC40
    Public Property Z1PPORDITEM As Z1PPORDITEM
End Class

Public Class EDI_DC40
    Public Property TABNAM As String
    Public Property MANDT As String
    Public Property DOCNUM As String
End Class

Public Class Z1PPORDITEM
    <XmlAttribute> Public Property SEGMENT As Integer
    Public Property AUFNR As String
    Public Property POSNR As String
    Public Property Z1PPORDOPER As Z1PPORDOPER
End Class

Public Class Z1PPORDOPER
    <XmlAttribute> Public Property SEGMENT As Integer
    Public Property VORNR As String
    Public Property ARBPL As String
    <XmlElement("Z1PPORDCOMP")> Public Property Z1PPORDCOMPs As List(Of Z1PPORDCOMP)
End Class

Public Class Z1PPORDCOMP
    <XmlAttribute> Public Property SEGMENT As Integer
    Public Property POSNR As String
    Public Property CPARAM As String
End Class

Then deserialize:

Dim s As New XmlSerializer(GetType(ZPPORDER01))
Dim z As ZPPORDER01
Using sr As New StreamReader("filename.xml")
    z = DirectCast(s.Deserialize(sr), ZPPORDER01)
End Using
For Each comp In z.IDOC.Z1PPORDITEM.Z1PPORDOPER.Z1PPORDCOMPs
    MessageBox.Show($"POSNR: {comp.POSNR}, CPARAM: {comp.CPARAM}")
Next
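The same "iterate over every Z1PPORDCOMP" pattern can be sketched in other languages too; for instance, a minimal Python equivalent using the standard library's ElementTree (shown only as an illustration, not as part of the VB solutions above):

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the IDoc XML from the question.
SAMPLE = """<ZPPORDER01><IDOC BEGIN="1">
  <Z1PPORDITEM SEGMENT="1"><Z1PPORDOPER SEGMENT="1">
    <Z1PPORDCOMP SEGMENT="1"><POSNR>0100</POSNR><CPARAM>RV ;</CPARAM></Z1PPORDCOMP>
    <Z1PPORDCOMP SEGMENT="1"><POSNR>0200</POSNR><CPARAM>PLT;</CPARAM></Z1PPORDCOMP>
  </Z1PPORDOPER></Z1PPORDITEM>
</IDOC></ZPPORDER01>"""


def read_comps(xml_text):
    """Return (POSNR, CPARAM) for every Z1PPORDCOMP with SEGMENT='1'."""
    root = ET.fromstring(xml_text)
    return [(c.findtext("POSNR"), c.findtext("CPARAM"))
            for c in root.iter("Z1PPORDCOMP")
            if c.get("SEGMENT") == "1"]
```

Iterating over all matching nodes and collecting the values, rather than writing one hard-coded path per node, is what makes the code work for any number of Z1PPORDCOMP segments.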
$.fn.botany = function () {
    // Error checking
    if (!(this.hasClass("botany") && this.prop("tagName") == "UL"))
        throw new Error("The target of $().botany must be a <ul> element with the class .botany");

    // Whenever any tree element is clicked, bubble up to find the nearest node, and close/open it
    $(this).on('click', 'li, ul, .botany-open, .botany-closed', function (e) {
        // Find the closest node
        var clicked = $(e.target).closest("li");
        // Find its list of children (<ul>)
        var childList = $(clicked).children("ul");
        if (clicked.hasClass("open")) {
            // If there are subnodes, run the slideUp animation before removing the open class
            if (childList.children().length > 0)
                childList.slideUp(function () { clicked.removeClass("open"); });
            // Otherwise just remove the class straight away (so there is no delay due to animation)
            else
                clicked.removeClass("open");
        } else {
            // If there are subnodes, add the class before sliding down so that jQuery can work out its
            // height correctly, but hide it so this isn't visible to the user
            if (childList.children().length > 0) {
                childList.hide();
                clicked.addClass("open");
                childList.slideDown();
            }
            // If there aren't subnodes, just add the class
            else
                clicked.addClass("open");
        }
        e.stopPropagation();
    });
};
Creating a report using both project and MDX cube data

You can create reports that use attributes, metrics, and other report objects from a MicroStrategy project together with metrics from an MDX cube. An MDX cube is a set of data retrieved from an MDX cube source. MDX cube sources can be imported into MicroStrategy and mapped to various objects to allow queries, reporting, and analysis on the data. For a detailed explanation of MDX cubes, including steps to connect to an MDX cube source and integrate MDX cubes into MicroStrategy, see the MDX Cube Reporting Guide.

To create an MDX cube report in Web, you need to have Web Professional privileges, including the Web Define MDX Cube Report privilege. At least one MDX cube must be imported into your MicroStrategy project. Importing MDX cubes is often handled by a MicroStrategy architect. For more information on importing MDX cubes into MicroStrategy, see the MDX Cube Reporting Guide.

To include MDX cube data in standard reports, you must map MDX cube columns for each project attribute you plan to include in the reports. For example, if a report includes the project attributes Year, Region, and Category, you must map MDX cube columns to these three attributes. For steps to map MDX cube columns to project attributes, see the MDX Cube Reporting Guide.

To create a report using both report objects from a MicroStrategy project and MDX cube data:

- Log in to the project in which you want to create a report.
- Click Create on any page, point to New Report, and select Blank Report. You can also select an existing template on which to build your new report. For steps to use an existing template to create a new report, see Creating a report based on an existing template.
- From the left, click All Objects, then navigate to the objects you want to place on the report. The location in which you begin browsing for objects is defined in the Report Options dialog box in MicroStrategy Developer. For more information on the Report Options dialog box, see the MicroStrategy Developer help (formerly the MicroStrategy Desktop help).
- Add attributes, metrics, filters, and prompts to your new report, as follows:
  - To add an attribute to the report, drag and drop the attribute from the All Objects pane onto the report. Attributes are generally placed on the rows of a report. The attribute must be mapped to data for the MDX cube you plan to report on. For steps to map MDX cube columns to project attributes, see the MDX Cube Reporting Guide.
  - To add a metric to the report, drag and drop the metric from the All Objects pane onto the report. Metrics are generally placed on the columns of a report.
  - A filter screens data in your data source to determine whether the data should be included in or excluded from the calculations of the report results. You can create and add a stand-alone filter to a report, or create a filter directly in the report. For steps to create a filter directly within the report, see Creating a filter within a report: Embedded filters. For steps to create a stand-alone filter and then add it to the report, see About filters, which covers the type of filter to create and links to steps for creating your filter.
  - A prompt is a question the system presents to a user when a report is executed. You can add a prompt to a report to determine what data is displayed on the report based on how the user answers the question. To add a prompt to the report, drag and drop the prompt from the All Objects pane onto the report. You need to know the type of prompt when deciding where and how to add it to a report. For example, Object prompts are most commonly placed directly on a report, but can also be placed in the condition part of a metric's definition in the Metric Editor, depending on the type of object in the Object prompt. For a table listing where to place different types of prompts, see Adding a prompt to a report.
- From the left, click MDX Objects. A list of available MDX cube sources is displayed.
- Click the links for the MDX cube sources to navigate to an MDX cube, then click the Metrics folder to view the metrics for that MDX cube.
- Drag and drop the MDX cube metrics you want to add to the desired location on the grid.
- Click the Run Report icon at the top of the page. You can view the report in Grid, Graph, or Grid and Graph view. If you want to move objects or format the report differently, return to Design Mode and make your changes.
- To save your new report, from the Home menu, select Save. The Save As dialog box opens.
- Navigate to the location in which you want to save your report, then type a name and description for the report in the Name and Description fields and click OK. Your report is saved.
EMM – Introduction

The CaseView core system provides generic support for devices which present an SNMP interface. However, in order to extend the support for network devices, CaseView provides additional Entity Manager Modules (EMMs) which are specific to particular devices and which extend the monitoring and management control of the devices. This page describes the Case Communications Viper Router EMM. It handles the following variants of Viper:

- Basic Unit with E1 Interface
- E & M Board
- 4 port FXS / FXO board
- Single port Primary Rate ISDN Board
- Dual port Basic Rate ISDN card
- Dual Port X.21 trunk Interface
- Dual Port V.35 trunk Interface

Launching the EMM

Launching the EMM is extremely straightforward. All you have to do is double-click on the icon on the map which represents the Viper you wish to manage. The EMM presents a window which shows a representation of the device, both front and back view. It also draws onto these views the current state of the SNMP-monitorable interfaces (e.g. links, protocols) that the Viper presents to the rest of the network. This information is continually refreshed at a constant interval, typically every ten seconds. The screenshot below shows the appearance of the Viper EMM screen. In this screenshot it can be seen that the ethernet and WAN links are up but no BRI ISDN connections are established.

The About menu provides information about the EMM version in use and about the information reported by the particular Viper in question. The Agent menu allows you to inspect, chart and graph the MIB-II SNMP variables that the Viper supports (an introduction to MIB-II is provided elsewhere in the documentation pack). The Viper provides access to the following tables (the numbers in brackets are the MIB Object IDs of the tables):

- System Table (1.3.6.1.2.1.1)
- Internet Protocol Address Table (1.3.6.1.2.1.4.20)
- Internet Protocol Routing Table (1.3.6.1.2.1.4.21)
- Internet Protocol ARP Table (1.3.6.1.2.1.4.22)
- ICMP Table (1.3.6.1.2.1.5)
- SNMP Table (1.3.6.1.2.1.11)

The example below shows the System Table. Note that the Interface Table (1.3.6.1.2.1.2) is not available on this menu, but it is available in great detail from the Ports and Port menus.

The Tools menu allows you to perform miscellaneous operations on the device:

- Make a telnet connection to the Viper manager.
- Make a Web browser connection to the Viper manager.
- Load configuration data to the device. You will be prompted to specify the address of the TFTP server to use and a file in which the configuration data has previously been stored.
- Dump configuration data from the device. You will be prompted to specify the address of the TFTP server to use and a file in which the configuration data will be stored.
- Restart the Viper.
- Show status and configuration information.
- Store and clear (in the database) access information, i.e. the password for the Switch Manager. If you choose to store a password in the database, you will not be prompted for one each time you want to access the Switch Manager; the stored password will be used to try the logon. However, if you choose not to store a password in the database, you will be prompted for one each time you want to access the Switch Manager (even if the password is blank). Note that these operations simply affect the password stored in the database; they do not affect the real password stored in the Switch itself.

The Port menu allows you to inspect, chart and graph SNMP variables from the Interface Table (1.3.6.1.2.1.2). It displays the interface entry for all the SNMP-monitorable interfaces on the device (as displayed on the graphic device display).
The options on the menu allow you to use different views into the table:

- Full View (this displays every variable in the entry)
- Info View (this displays a selection of the more important variables)
- Usage (BPS) View (this displays the port usage variables)
- Utilisation (%) View (this displays the port utilisation variables)

The example below shows the Info View.

Viewing / Changing Data

Some of the EMM menus allow you to inspect, chart and graph SNMP variables. Exactly how to perform these very powerful operations is described in the Getting Started documentation. The example below shows a graph of Port Utilisation data. In general, the Viper provides read-only access to its SNMP tables. However, the following entries of the System Table can also be modified:

For more information please contact Case Communications
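For reference, the MIB-II tables listed above live at well-known OIDs defined by the standard, independent of the Viper itself. A small lookup helper (generic MIB-II knowledge, not anything Viper- or CaseView-specific) might look like:

```python
# Standard MIB-II (RFC 1213) object identifiers for the tables above.
MIB2_TABLES = {
    "system":            "1.3.6.1.2.1.1",
    "interfaces":        "1.3.6.1.2.1.2",
    "ipAddrTable":       "1.3.6.1.2.1.4.20",
    "ipRouteTable":      "1.3.6.1.2.1.4.21",
    "ipNetToMediaTable": "1.3.6.1.2.1.4.22",  # the ARP table
    "icmp":              "1.3.6.1.2.1.5",
    "snmp":              "1.3.6.1.2.1.11",
}


def column_oid(table, *suffix):
    """Append sub-identifiers to a table's base OID.

    For example column_oid("system", 1, 0) yields the OID of sysDescr.0.
    """
    return MIB2_TABLES[table] + "".join("." + str(s) for s in suffix)
```

Any SNMP client library can then GET or WALK these OIDs against the device's agent.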
Configuration of React app, .NET Core 3.1 API, and calls to Microsoft Graph

Is there a "best" way of achieving this? Basically I want to leverage my company's Azure AD tenant to build a fully featured internal application. Using Microsoft Graph, I can retrieve users via their identifier guids, and use the identifiers as foreign keys for various tables in our on-premises database, instead of having a dedicated User table, which would need to be populated and kept in sync with the AD. There are many other prospective uses for Graph, but leveraging users is the priority right now.

A large chunk of my application is built already. I am able to lock down my client app using the package react-aad-msal, requiring users to authenticate through single sign-on. I have also successfully been able to pass that token back to the protected .NET Core API, accessing various endpoints as the authenticated user. From here, I am not sure how to develop the calls to Microsoft Graph. At which point should I make the connection? Should the client application connect to both the on-prem API and Graph? Or should it only connect to the on-prem API, which would then connect to Graph? Curious to know the pros and cons of either method.

I've also heard that Microsoft is working on their own package, @azure/msal-react, and that react-aad-msal should no longer be used (as it only supports MSAL 1.0 and not 2.0; I have no idea which version is better for my needs). While msal-react is still in development, apparently I should be using @azure/msal-browser. But I cannot find a good example of a React app using msal-browser to authenticate.

Here is a sample of how to use MSAL with React to call Microsoft Graph. The only difference in your case will be that instead of calling Microsoft Graph, you will call your own API. Bottom line: there is no direct integration package yet for React.
Which can also be read from the official statement on the msal-js repo: "After our current libraries are up to standards, we will begin balancing new feature requests with new platforms such as React and Node.js." You can also use .NET Core instead; please go through the sample here, which can help.

Thanks kindly for your response. Is there any word yet on when msal-react will be available? I have already looked at the sample for msal-browser and, to be honest, it's very difficult to understand what's happening in the code, even when following the tutorial video. I'm surprised that no state management system is used, and more examples would be very helpful. For instance, I would like to use an automatic popup or redirect when it is detected that the user is not authenticated, rather than the button showing up.

Hi @apriestley, there is no roadmap available with us currently, but it will be updated on the website. One suggestion is to use .NET Core for the purpose of authentication, as it is more secure and more documentation support is available. If you face any issue, please let us know by asking a question on Stack Overflow; we are always available to help you.

Hi Hari. My .NET Core endpoints are currently protected behind Azure AD, just as the client application is (I prefer to protect both). The MSAL roadmap shows that the private preview for msal-react is expected to be available this month. Is it possible for me to sign up?

Hi @apriestley, the private preview is currently available.
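For background on the version question above: react-aad-msal wraps MSAL 1.x, which uses the OAuth 2.0 implicit flow, while @azure/msal-browser (MSAL 2.x) uses the authorization code flow with PKCE. The library generates the PKCE values internally; the sketch below (Python, purely for illustration; the function name is made up and this is not msal-browser's API) only shows the verifier/challenge mechanism from RFC 7636:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).

    msal-browser does the equivalent internally when it starts the
    authorization code flow; this only illustrates the mechanism.
    """
    # 32 random bytes -> 43-character base64url verifier (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge sent in the authorize request is the SHA-256 of the
    # verifier, base64url-encoded without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43
```

The verifier stays in the client and is only sent later when redeeming the authorization code, which is why this flow is considered safer for SPAs than the implicit flow.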
I never felt comfortable using a computer with the OS and software preinstalled by the manufacturer, and the Dell Mini is no exception. That's not because I'm snobbish and always want to be different. It's just that I want to have full control of the software I'm using. And I don't like crapware:

- You get an idea of what I'm talking about when looking at the screenshots from my previous post. Dell sold the privilege of being the standard search engine in Firefox to Yahoo, and in turn they had to include all kinds of Yahoo stuff which I neither need nor like (and they had to call Firefox “web browser” 😏).
- What you don't see on the screenshot: the “web browser” comes with a preinstalled Yahoo toolbar, which you can't uninstall from the add-ons menu as you can other extensions (I accidentally discovered the toolbar in synaptic, which eventually allowed me to remove it).
- As a matter of fact, the entire installation is, in a certain sense, proprietary. It is compiled for lpia (low power Intel architecture), and while this architecture is binary compatible with i386, you always have to use the force-architecture switch when installing a standard x86 package.
- What's more, the Dell Mini even has its own repositories. You can't simply upgrade to Ubuntu 8.10 online by using, say, the update manager, because there is no 8.10 for the Dell Mini.
- An upgrade is also made difficult by the fact that Dell did not partition the 8 GB SSD of the mini. Formatting the partition thus means losing all configuration files along with it.
- And finally, at least some minis suffer from sudden freezes which are believed to be related to the WLAN connection. I've experienced these system lock-ups too. They are truly severe: only the magic SysRq key saved me from having to power off the mini. My attempts to replace the proprietary wl driver (which I suspect to be the culprit) of the Broadcom WLAN chip with the open b43 driver were, by the way, unsuccessful.
All in all, quite a few reasons for a clean reinstall, wouldn't you say so? 😉 Since the mini doesn't have an optical drive, I had to prepare a bootable USB stick. Ubuntu comes with a package called usb-creator for doing just that. Prior to “burning” the iso image to the stick, it proved to be necessary to prepare it by issuing:

dd if=/dev/zero of=/dev/sdb bs=512 count=1; blockdev --rereadpt /dev/sdb; usb-creator

This command erases any previous content on the stick and rereads the partition table. Usb-creator then offers to format the stick and to write the image. What followed was an uneventful installation of Ubuntu 8.10. Of course, I created a separate /home for which I reserved 2 GB. Since space is an issue, I checked the size of the installed packages with the following little script, which imitates the function of John Walker's Perl script rpmhogs on a dpkg based system:

[script listing lost]

Or, alternatively (thanks to haui for this rather unexpected discovery):

[script listing lost]

Call this script with dpkghogs | head -n 20, and you'll get a top 20 of the fattest packages installed on your system. 😄 In my case, the biggest is openoffice.org-core with 103 MB, followed by evolution-common with 93 MB (hmmm... I'll install claws instead) and the linux kernel (2.6.27) with 90 MB. Now, guess what the biggest package on Dell's original installation was. You'll never figure it out! Acrobat Reader with 120 MB. 😮 Here are two screenshots to give you a visual impression. I use the Dust theme and the Meliae-Dust icons, which go well together. The right screenshot shows firefox with the dustfox theme on a very well-known blog. 😉 As you see, there's still plenty of space on the screen. Finally, I've found to both my great surprise and pleasure that the system feels much faster than I thought.
There are, I think, three factors which are crucial for this impression: first, the SSD reads data as fast as 30 MB/s; second, the Atom CPU handles hyperthreading; and third, the integrated Intel GMA950 is fast enough (glxgears shows 550 fps, sufficient for simple 3D shooters such as AssaultCube and OpenArena) to run Compiz, which in turn keeps the load on the CPU low when shifting windows around. The system just feels very responsive, much more so than I expected. Oh, and another good point: the battery life exceeds five hours. I had hoped for merely four. 😊
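The dpkghogs listings above were lost, so here is a hypothetical reconstruction of the idea rather than the author's original script: on a dpkg-based system the per-package sizes come from `dpkg-query -W -f='${Installed-Size}\t${Package}\n'` (Installed-Size is in KiB), and the rest is a descending sort. The Python sketch below demonstrates that sort on sample data:

```python
def dpkghogs(lines, n=20):
    r"""Return the n largest installed packages, largest first.

    Each line is "<installed-size-KiB>\t<package>", as produced by
    dpkg-query -W -f='${Installed-Size}\t${Package}\n'
    (hypothetical reconstruction -- the original script was lost).
    """
    rows = []
    for line in lines:
        size, name = line.rstrip("\n").split("\t")
        rows.append((int(size), name))
    return sorted(rows, reverse=True)[:n]

# Sample lines loosely resembling the sizes quoted in the post (KiB):
sample = [
    "90000\tlinux-image-2.6.27",
    "103000\topenoffice.org-core",
    "93000\tevolution-common",
]
print(dpkghogs(sample, 2))  # openoffice.org-core first, then evolution-common
```

On a real Debian/Ubuntu system one would feed this the actual dpkg-query output instead of the sample list.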
Graph::Maker::ExcessConfigurations - create ExcessConfigurations graph

    use Graph::Maker::ExcessConfigurations;
    $graph = Graph::Maker->new ('excess_configurations', N => 7);

Graph.pm graphs of the evolution of multigraph excess "configurations" in the manner of Svante Janson, Donald E. Knuth, Tomasz Luczak, Boris Pittel, "Birth of the Giant Component", Random Structures and Algorithms, volume 4, 1993, pages 233-358. https://arxiv.org/abs/math/9310236, https://onlinelibrary.wiley.com/doi/abs/10.1002/rsa.3240040303

             /---> 0,1 ---> 1,1       ( cross connections
    0 ---> 1      X                     0,1 and 2
             \---> 2 ---> 0,0,1         -> 1,1 and 0,0,1 )
                    \----> 3

The "excess" of a connected component of a multigraph is

    excess = number of edges - number of vertices
           = -1 for tree (acyclic), 0 for unicyclic, ...

Janson et al take the configuration of a multigraph to be the number of components of excess r = 1, 2, 3, etc.,

    [count r=1, count r=2, ...]    vertex name such as "1,0,2"

In the ExcessConfigurations graph, each vertex is such a configuration. An edge is to a new configuration obtained by adding one edge to the multigraph. Such an edge can increase the excess of an existing component, including raising an r=0 to a new r=1. Or it can join two components r,s together for a new combined excess r+s+1. r=0 components don't appear in the configuration but are taken to be in unlimited supply. Janson et al note this is a partial ordering of configurations, since total excess increases by 1 each time. Total excess is the total over the r>=0 components, so not including tree components,

    total excess = r1 + 2*r2 + 3*r3 + ...

Parameter N here is how many evolution steps, so configurations of total excess 0 through N inclusive, and the edges between them. Total excess is the sum of component excesses, so those component excesses are a partition of the total. The number of vertices is thus a sum over t = total excess,

    num vertices = sum(t=0,N, NumPartitions(t)) = 1, 2, 4, 7, 12, 19, ...
(A000070)

As a note on terminology, excess r=0 described above is a unicyclic component, meaning it has one cycle and possibly acyclic branches hanging off. Janson et al call r=1 bi-cyclic (and r=2 tri-cyclic, etc). The cases for r=1 are, up to paths or vertex insertions (so "reduced multigraphs"),

     _   _        __      __        __
    / \ / \      |  |    |  |      /  \        excess r=1
    | A |        | A------B |     A----B       (Janson et al
    \_/ \_/      |__|    |__|      \__/        equation 9.15)
    separate loops                three paths

The separate loops are clearly 2 cycles. The three paths case would be 3 cycles if all combinations were allowed. Excess r=1 as bi-cyclic is understood as successive cycles using at least one previously unused edge, the way edges are added to the multigraph in forming a cycle.

    $graph = Graph::Maker->new('excess_configurations', key => value, ...)

The key/value parameters are

    N           => integer, number of steps
    graph_maker => subr(key=>value) constructor, default Graph->new

Other parameters are passed to the constructor, either the given graph_maker or the default Graph->new. If the graph is directed (the default) then edges are in the add-an-edge direction between configurations.

House of Graphs entries for the trees here include

    1310     N=0, singleton
    19655    N=1, path-2
    500      N=2, star-4 claw
    33585    N=3
    33587    N=4

Entries in Sloane's Online Encyclopedia of Integer Sequences related to this tree include

    A000070    num vertices, cumulative num partitions
    A029859    num edges, partitions choose 0,1,2 terms
    A000041    num successorless, partitions of N

Copyright 2018, 2019 Kevin Ryde

This file is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version.

This file is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this file. If not, see http://www.gnu.org/licenses/.
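The vertex-count formula in the description (cumulative partition numbers, OEIS A000070) is easy to check. A short sketch, in Python rather than Perl purely for brevity:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, k=1):
    """Number of partitions of n into parts >= k (OEIS A000041 for k=1)."""
    if n == 0:
        return 1
    # Sum over the smallest part m of the partition.
    return sum(partitions(n - m, m) for m in range(k, n + 1))

def num_vertices(N):
    """Vertices in the ExcessConfigurations graph for parameter N:
    one configuration per partition of each total excess 0..N."""
    return sum(partitions(t) for t in range(N + 1))

print([num_vertices(N) for N in range(6)])  # [1, 2, 4, 7, 12, 19]
```

The printed values match the "1, 2, 4, 7, 12, 19, ..." quoted in the description.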
I have heard myself say these words many times, and yet it always comes as a surprise. Surely people know that deleting data is a bad thing? Apparently not. For the sake of this example I will talk about Purchase Orders (POs), but you can replace this with Home Loan Application, Insurance Claim, Invoice, Client Record, etc. Of course, with my Data, Info and Analytics glasses on, I would never dream of deleting data; I mean, how can I answer questions around voided Purchase Orders, for heaven's sake? That is assuming that someone would ever ask such a pointless question. Our frontend friends often don't have the same reservations; I mean, if the Purchase Order was only ever in a Draft status, who cares if they physically delete it? To them it never 'really' existed anyway!! And so these are the two paradigms: an analytics system that keeps copies of everything, just in case, and a source system that wants to get rid of the junk so it can remain lean and performant. So why is this a constant surprise? Surely I have learnt my lessons by now? Well, the issue is I do always ask, 'so, any deletes in the source?', to which I am invariably told, 'No, no, no!!! We do soft deletes'. The nuances are:

- Soft deletes are only done after the Purchase Order is Approved; before that they can be deleted without hesitation to get rid of the 'junk'.
- Complex systems like CRM or ERP will have many developers working to customise the solution for a specific client. Some of the developers may not have got the email about physical deletes being bad news.
- Some source system vendors physically move the record elsewhere and refer to it as 'archiving', or may only introduce archiving several months after going live.

So let's imagine we are about to embark on a new project; what do we do? I suggest you:

- At the kick-off meeting, share with your client the risk that source system deletes do sometimes happen. Even if they tell you it never happens, at least you've raised the issue.
- Suggest that the testing team, assuming you are lucky enough to have one, look out for this scenario. They should have a test case at least.
- Push for the inclusion of a reconciliation feature for whatever you are building. It will make sure that the target doesn't diverge from the source system. It doesn't need to be fancy; it can be a weekly check of row counts and totals that gives IT and the business confidence.
- Be prepared for the fact that the test data you use during development will not have the variety of changes you can expect in production once you have gone live. This means you may only experience deletes in BAU/DevOps mode, so look out for them!!
- If you have access to a Change Data Capture (CDC) tool then think about using it, as it will capture deletes; however, this comes with an overhead. You will get a whole load of noise and you will need to filter out inserts/updates if you don't care about them.
- If there is a choice between a traditional RDBMS like SQL Server and a cloud-native DB like Snowflake, push for Snowflake, or similar, as they have CDC features out of the box.

OK, so what to do when you don't have CDC, the client and the source system vendor have told you physical deletes don't happen, but you get a service ticket telling you that a report still contains a purchase order that no longer exists in the source?

- If deletes are only being done on POs in a draft state, can we exclude them from the pipeline and stop the data ever being loaded into the analytics repository?
- Does the Fact, Table or Dim that contains the deleted record identified in the service ticket actually need history? I know, heresy, but for simple Data Mart solutions just go and get all the current POs.
- Do the deletes happen within a time window (i.e. an archive process that runs every month)? If this is the case, delete and reload all POs created in recent months.
- If the data is too large, use a pipeline that runs nightly upserts of the Dim/Fact/Table and a separate one to carry out a weekly bulk reload of all POs.
- Does your database have inherent functionality to merge changes, i.e. INSERT, UPDATE and DELETE, like SQL Server's MERGE? If so this is an easy option, but you will need to stage a complete set of the POs before merging them.
- If you have a lot of data, and want to retain history, use a pipeline that runs nightly upserts of the Dim/Fact/Table and a separate one to carry out a nightly bulk reload of all PO business keys into a staging area (e.g. stg.PO_BK). The staged table will be quick to load as it will be very narrow, and it can then be used to identify all POs that exist in the Dim/Fact/Table but are no longer present in the stage table (i.e. stg.PO_BK). Once you have identified the rogue POs, do a soft delete in the appropriate Dim/Fact/Table.

It can be hard, if not impossible, to identify this type of issue upfront; often you will not be able to get evidence of it until the solution has been running for weeks, if not months. And please don't suggest it is a data quality issue; it's not. As analytical types we simply care about different things to the source system guys. Being prepared for deletes, with an integration pattern that can be quickly applied, is the secret that should get you out of trouble; hopefully this post has provided some options.

Image courtesy of pixabay.com
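The staged-business-key pattern in the last bullet boils down to a set difference, and the reconciliation check suggested earlier is just a count-and-total comparison. A minimal sketch in Python (table and column names such as stg_po_bk and po_amount are illustrative, not from any particular system):

```python
def find_deleted_pos(dim_keys, staged_keys):
    """Business keys present in the Dim/Fact but missing from the
    nightly-reloaded stage table: candidates for a soft delete."""
    return sorted(set(dim_keys) - set(staged_keys))

def reconcile(source_rows, target_rows, amount_key="po_amount"):
    """Weekly sanity check: row counts and amount totals must match.
    Rows are dicts; the column name is made up for the example."""
    src_total = sum(r[amount_key] for r in source_rows)
    tgt_total = sum(r[amount_key] for r in target_rows)
    return len(source_rows) == len(target_rows) and src_total == tgt_total

dim_po_bk = ["PO-001", "PO-002", "PO-003"]
stg_po_bk = ["PO-001", "PO-003"]               # PO-002 was deleted at source
print(find_deleted_pos(dim_po_bk, stg_po_bk))  # ['PO-002']
```

In a real pipeline the two key lists would come from the warehouse table and the freshly loaded stage table, and the rogue keys would drive a soft-delete update rather than a physical delete.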
var anchors = [
  [250, 'Max daily Covid19 deaths in LA'],
  [2200, 'Number of students at Annenberg USC'],
  [2977, 'Number of victims in 9/11'],
  [3352, 'Max daily Covid19 deaths in the US'],
  [23872, 'Total Covid19 deaths in LA till date'],
  [25771, 'Population of University Park, LA'],
  [44000, 'Number of USC students'],
  [60000, 'Annual deaths in LA county (2019 data)'],
  [61661, 'Total Covid19 deaths in CA till date'],
  [78467, 'Capacity of LA Coliseum'],
  [88000, 'Number of USC & UCLA students']
]

var data;
$.getJSON('https://www.mohfw.gov.in/data/datanew.json', function(temp) {
  console.log(temp)
  data = {};
  for (const value of Object.values(temp)) {
    //console.log(value)
    data[value['state_name']] = value['new_death'] - value['death']
  }
  data['Jammu and Kashmir'] = data['Jammu and Kashmir'] + data['Ladakh']
  data['India'] = data['']
});

$("path, circle").mouseenter(function(e) {
  $('#hover-box').css('visibility','visible');
  //$('#hover-box').html($(this).data('info'));
  $('#hover-box').html($(this).attr('name'));
});

$("#how").click(function (e){
  //console.log("instructions")
  $("#choose").toggle()
});

$("path, circle").click(function(e) {
  $('#choose').css('display','none')
  id = $(this).attr('id')
  name = $(this).attr('name')
  img = 'img/'+id+'.jpg'
  console.log(`${name}: ${data[name]}`);
  console.log(`${img}`)
  //console.log(`${(data[name]*25/2200).toPrecision(4)}`)
  $('#img').attr('src',img);
  html = "<b>State</b>: " + name
  html = html + " ("+id+")"
  est = data[name]*25
  html = html + "<br><b>Deaths</b>: " + data[name] + " official; " + est + " estimated."
  //html = html + "<br>" + (data[name]*est*100/44000).toPrecision(2) + "% of all USC students.<br>" + (data[name]*est*100/2200).toPrecision(2) + "% of all Annenberg students.<br>" + (data[name]*est*100/8600).toPrecision(2) + "% of all Viterbi students."
  $('#text').html(html)

  // Pick the smallest anchor larger than est; default to the largest
  // anchor if est exceeds them all, and make sure a smaller anchor
  // exists for the comparison circle below.
  var index = anchors.length - 1;
  for (const [idx, element] of anchors.entries()) {
    //console.log(index, element);
    if (est < element[0]) { index = idx; break; }
  }
  if (index === 0) index = 1;
  console.log(index, anchors[index])
  v_big = anchors[index][0]
  t_big = anchors[index][1] + `: ${v_big}`
  v_sml = anchors[index-1][0]
  t_sml = anchors[index-1][1] + `: ${v_sml}`
  $('.circle1').attr('r','10vh')
  $('.circle1').attr('cx','12vh')
  $('.circle1').attr('cy','12vh')
  console.log(`text1: ${t_big}`)
  $('.text1').html(t_big)
  r2 = est*10/v_big
  console.log(`r2: ${r2}`)
  $('.circle2').attr('r', r2+'vh')
  $('.circle2').attr('cx','12vh')
  $('.circle2').attr('cy', 22-r2+'vh')
  $('.text2').html(`Estimated Deaths in ${name} yesterday: ${est}`)
  r3 = v_sml*10/v_big
  console.log(`r3: ${r3}`)
  $('.circle3').attr('r', r3+'vh')
  $('.circle3').attr('cx','12vh')
  $('.circle3').attr('cy', 22-r3+'vh')
  console.log(`text3: ${t_sml}`)
  $('.text3').html(t_sml)
  //$('#info-box').html(data[name])
  //$('#info-box').css('visibility','visible')
  //$('#info-box').children().css('visibility','visible')
  // State
  //$('#state').html("<b>State</b>: " + name + " (" + id + ")")
});

$("path, circle").mouseleave(function(e) {
  $('#hover-box').css('visibility','hidden');
});

$(document).mousemove(function(e) {
  $('#hover-box').css('top',e.pageY-$('#hover-box').height()-30);
  $('#hover-box').css('left',e.pageX-($('#hover-box').width())/2);
}).mouseover();

var ios = /iPad|iPhone|iPod/.test(navigator.userAgent) && !window.MSStream;
if(ios) {
  $('a').on('click touchend', function() {
    var link = $(this).attr('href');
    window.open(link,'_blank');
    return false;
  });
}
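The click handler's comparison circles bracket the estimate between two anchor values: the smallest anchor above it and the next one below. That selection, including the edge cases at either end of the anchors list, can be expressed compactly; a Python sketch for illustration (not part of the page's JavaScript):

```python
def pick_anchors(anchors, est):
    """Return (smaller, larger) anchor pairs bracketing est.

    anchors is an ascending list of (value, label) pairs like the JS
    array above. Falls back to the top pair when est exceeds every
    anchor, and to the first pair when est is below all of them.
    """
    index = len(anchors) - 1
    for idx, (value, _label) in enumerate(anchors):
        if est < value:
            index = idx
            break
    index = max(index, 1)  # guarantee a smaller anchor exists at index-1
    return anchors[index - 1], anchors[index]

anchors = [(250, 'a'), (2200, 'b'), (2977, 'c')]
print(pick_anchors(anchors, 1000))  # ((250, 'a'), (2200, 'b'))
```

The circle radii then follow by scaling each value against the larger anchor, as in the JS (`r = value * 10 / v_big`).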
Machine Learning Based Prediction and Impact Analysis of Various Lockdown Stages of COVID-19 Outbreak – A Case Study of India

Keywords: COVID-19, machine learning, SVM, SIR model, lockdown predictions.

Various measures have been taken against the virus outbreak, but how successful have they been in controlling it? Machine learning is used as a tool to study these complex impacts at various stages of the epidemic. While India is forced to open up the economy after an extended lockdown, the effect of the lockdown, which is critical for deciding the future course of action, is yet to be understood. The study suggests Support Vector Machine (SVM) and Polynomial Regression (PR) are better suited than Long Short-Term Memory (LSTM) in scenarios consisting of sparse and discrete events. The time-series memory of LSTM is outperformed by the contextual hyperplanes of SVM, which classify the data even more precisely. The study suggests that while phase 1 of the lockdown was effective, the rest were not. Had India continued with lockdown 1, it would have flattened the COVID-19 infection curve by mid-May 2020. At the current rate, India will hit the 8 million mark by 23 October 2020. The SVM model is further integrated with an SIR (Susceptible, Infected and Recovered) model of epidemiology, which suggests that 70% of India's population would be infected by this pandemic within 8 months, with the peak reached in October 2020 if no vaccine is found. An increasing recovery rate increases the possibility of decreasing COVID-19 cases. According to the SVM model's prediction, 90% of COVID-19 cases will have ended by February.
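The SIR (Susceptible, Infected, Recovered) model the abstract refers to is a small system of ordinary differential equations. A minimal forward-Euler sketch (Python; the beta and gamma values below are arbitrary illustrative choices, not the paper's fitted parameters):

```python
def sir_step(S, I, R, beta, gamma, N, dt=1.0):
    """One forward-Euler step of the SIR equations:
       dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    """
    new_infections = beta * S * I / N * dt
    recoveries = gamma * I * dt
    return S - new_infections, I + new_infections - recoveries, R + recoveries

# Illustrative run; beta/gamma are made up, not fitted to India's data.
N = 1_000_000
S, I, R = N - 100, 100, 0
for _ in range(200):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, N=N)
print(round(S + I + R))  # 1000000 -- the population is conserved
```

Fitting beta and gamma to observed case counts (here via the SVM predictions) is what turns this toy integrator into a forecasting tool.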
There is no official .deb installer of VMware Tools for Debian: VMware provides Operating System Specific Packages (OSPs) for distributions such as Ubuntu, but not for Debian, so the tools have to be installed manually. VMware Tools is all but a necessity when working inside a VMware guest, as it provides noticeable gains in productivity and host-guest integration. There are several ways to get it onto a Debian system: install it manually from the ISO shipped with the VMware product; use the vmware-tools-patches repository, which patches and builds VMware Tools automatically (copy or download the version of VMware Tools you wish to use into the vmware-tools-patches folder); or, simplest of all, install the open-vm-tools package, the official open-source implementation of VMware Tools, straight from the Debian repositories (Debian 7.x and later), e.g. with `apt-get install open-vm-tools`. Ready-made VMware images with the tools preinstalled also exist for several distributions, including Debian 6 "Squeeze", Bodhi Linux and Fedora, and similar installation guides are available for Debian Lenny on ESX Server as well as for RHEL/CentOS 7, PCLinuxOS and other distributions.
After talking with Noggin this morning about his attempts to get his new PPC-6601 to connect his laptop to the internet via a USB connection, I did some digging. Thanks to the folks at Geekzone I found the following article. After downloading everything and making some modifications, I have the following modified for US Sprint (and possibly other) users.

Dialer for Windows Mobile CDMA 1x or CDMA EV-DO

The zip has two files; I only had to use the .inf file though. You can use either a USB cable or the cradle.

1. Unzip the contents of the download from this page into a folder on your computer.
2. Turn on the PPC-6601 Pocket PC (or your Pocket PC Phone Edition CDMA).
3. On the PPC-6601, run 'WModem', select USB, and press 'Start'.
4. Plug the cable into the PPC-6601 and the USB end into the laptop (or sit the PPC-6601 in its cradle and plug the USB end into the laptop).
5. When Windows asks for a driver, point it at the directory where CDMA1X_USBMDM.INF is located.
6. On the laptop, click Start / Connect To / Show all connections (or Control Panel / Networking).
7. Click on "Add new connection" and you should see this:

Click on Next and you should see this: Choose Internet (the first one) and click Next. You should see this: Choose the manual setup option and hit Next. You should see this: Choose dial-up and hit Next. You should see this: Choose the modem installed above and hit Next. You should see this: Type #777 for the phone number and hit Next. You should see this: Leave all of this blank. REMOVE THE CHECKMARK FOR MAKING THIS YOUR DEFAULT INTERNET CONNECTION; you will be sorry if you don't. Hit Next. You should see this: Check the box (or not) and click Finish.
It should show you a screen like this: Click on Properties: Click on Options: Make sure to remove the prompt items (username and phone number) and check security: Make sure it has "Allow unsecured passwords" (we don't use one). Click the Networking tab: Make sure the type of server is PPP (not SLIP). Click the Advanced tab: Click OK and you should be connected. ActiveSync will work normally when WModem is not started. Click HERE to download the .inf file you need to do all of this.
'mod_ssl' on Rocky Linux in an httpd Apache Web-Server Environment¶

Apache Web-Server has been used for many years now; 'mod_ssl' is used to provide greater security for the Web-Server and can be installed on almost any version of Linux, including Rocky Linux. The installation of 'mod_ssl' will be part of the creation of a LAMP server for Rocky Linux. This procedure is designed to get you up and running with Rocky Linux using 'mod_ssl' in an Apache Web-Server environment.

- A workstation or server, preferably with Rocky Linux already installed.
- You should be in the root environment or type sudo before all of the commands you enter.

Install Rocky Linux Minimal¶

When installing Rocky Linux, we used the following sets of packages:

Run System Update¶

First, run the system update command to let the server rebuild the repository cache, so that it can recognize the packages available. With a conventional Rocky Linux server installation all necessary repositories should be in place.

Check The Available Repositories¶

Just to be sure, check your repository listing with:

dnf repolist

You should get the following back, showing all of the enabled repositories:

appstream Rocky Linux 8 - AppStream
baseos Rocky Linux 8 - BaseOS
extras Rocky Linux 8 - Extras
powertools Rocky Linux 8 - PowerTools

To install 'mod_ssl', run:

dnf install mod_ssl

To enable the 'mod_ssl' module, restart the web server and verify that the module is loaded:

apachectl restart
apachectl -M | grep ssl

You should see an output as such:

Open TCP port 443¶

To allow incoming traffic with HTTPS, run:

firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload

At this point you should be able to access the Apache Web-Server via HTTPS. Enter https://your-server-hostname to confirm the 'mod_ssl' configuration.
Generate SSL Certificate¶

To generate a new self-signed certificate for host rocky8 with 365 days expiry, run:

openssl req -newkey rsa:2048 -nodes -keyout /etc/pki/tls/private/httpd.key -x509 -days 365 -out /etc/pki/tls/certs/httpd.crt

You will see the following output:

Generating a RSA private key
................+++++
..........+++++
writing new private key to '/etc/pki/tls/private/httpd.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value; if you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:AU
State or Province Name (full name) :
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:LinuxConfig.org
Organizational Unit Name (eg, section) :
Common Name (eg, your name or your server's hostname) :rocky8
Email Address :

ls -l /etc/pki/tls/private/httpd.key /etc/pki/tls/certs/httpd.crt
-rw-r--r--. 1 root root 1269 Jan 29 16:05 /etc/pki/tls/certs/httpd.crt
-rw-------. 1 root root 1704 Jan 29 16:05 /etc/pki/tls/private/httpd.key

Configure Apache Web-Server with New SSL Certificates¶

To include your newly created SSL certificate in the Apache web-server configuration, open the ssl.conf file (/etc/httpd/conf.d/ssl.conf). Then change the following lines:

SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

to:

SSLCertificateFile /etc/pki/tls/certs/httpd.crt
SSLCertificateKeyFile /etc/pki/tls/private/httpd.key

Then reload the Apache Web-Server by running:

systemctl reload httpd

Test the 'mod_ssl' configuration¶

Enter https://your-server-hostname in a web browser.

To Redirect All HTTP Traffic To HTTPS¶

Create a new configuration file under /etc/httpd/conf.d/. Insert the following content and save the file, replacing "your-server-hostname" with your hostname.
<VirtualHost _default_:80>
    ServerName rocky8
    Redirect permanent / https://your-server-hostname/
</VirtualHost>

Apply the change by reloading the Apache service:

systemctl reload httpd

The Apache Web-Server will now redirect any incoming HTTP traffic to HTTPS.

We have seen how to install and configure 'mod_ssl' and how to create a new SSL certificate in order to run a Web-Server under HTTPS. This tutorial will be part of a series covering the installation of a LAMP (Linux, Apache Web-Server, MariaDB Database-Server, and PHP scripting language) server on Rocky Linux version 8.x. Eventually we will be including images to help better understand the installation.

Contributors: Steven Spencer, David Hensley
Many RPGs let you use consumable items to attack enemies, create defensive barriers, heal allies, and various other effects. However, as I mentioned in my last post, I find that the extent to which I use such items varies quite heavily depending on which RPG I am playing. In some RPGs, I might use items regularly, while in others I might not use them at all. I am not certain that my experiences are universal, but I do see specific reasons for why I use items more often in some games than in others, and some types of items more than other types. A major factor that determines how often an item gets used is the rarity and cost of the item. If I can only get a few items of a type, and there are no easy ways to get new items of that type, then I will probably never use that item. I will always "save it for later when I will need it more", and that means never using it at all unless there is a particularly difficult moment in the game in which the item is needed (which has only happened once or twice to my recollection). If an item can be used an infinite number of times, or can be acquired in great numbers cheaply, then I will use it freely. Another fairly obvious factor is the overall usefulness of an item. I am fairly likely to use a powerful item (such as Final Fantasy's classic "Megalixer" item), especially if it is stronger than other items for its cost. An item which is weaker than other items for its cost is a lot less likely to be used. To use an example from the Final Fantasy series, if I can afford large numbers of both Potion and Hi-Potion, then I will use more Hi-Potion items simply because they heal more than normal Potions. From this point on my argument gets a bit more complex, so first I will list some specific games (mostly taken from those I mentioned lately) in which I tend to use items, and some games in which I do not. 
In Chrono Trigger and Final Fantasy games, I only used HP restoration items (mostly only outside of battle), MP restoration items, status restoration items, and Megalixers (only in the final battle). In Atelier Iris 2, I used items of all kinds sporadically. In Ar Tonelico, I only use MP restoration items (and only rarely in boss battles). In Odin Sphere I used many items in every boss battle, and often used items in difficult normal battles. Taking all of this into account, there are a few things I can determine. First, the opportunity cost of an item is extremely important. In the Final Fantasy series, healing items do not heal more than one person, while other magic spells do, so it is often better to use magic spells to heal many people at once in order to save time that could otherwise be used to do other things. On the other hand, in Atelier Iris 2 only one character can heal the whole party at once (and only with mild effectiveness and at high cost), and there are many healing items that can heal the whole party very well, so healing items are more likely to be used in a pinch than character-specific abilities. Players will tend to go with the most efficient strategies, so items will only be used if they are part of those strategies. Second, items will be used more if there are effects that can only be created with an item. In many RPGs, this is mid-battle/mid-dungeon MP restoration. In these games, MP restoration items are some of the most important. Even if items are otherwise pointless, items with necessary unique effects will still be used by the player. Third, the cost of an item is always relative to the cost of other options. In the Final Fantasy series, HP restoration items tend to be useful because it saves MP to be used on other things (such as attack spells), which means that using HP restoration items saves you from using rare and more expensive MP restoration items (since those are the only way to restore MP in a tight spot). 
In Atelier Iris 2, all character-specific abilities are drawn from a meter that is filled as the battle progresses, so using those abilities has no cost on any permanent resource, making even the cheap and common items of that game have a relatively high cost. Since using the most powerful effect at the lowest cost is the most basic of all strategies, players will tend to use the lowest-cost method of getting any effect. Finally, there is a psychological cost involved in the ease of use of any particular method. Often, item menus are large messes which require far more tedious navigation than other menus. If it is annoying to sort through, then many players will not bother with it, and will use options other than items whenever possible. As a whole, the reason I used items more in Odin Sphere than in any other RPG I have ever played, even Atelier Iris 2, is because everything in the game is designed to encourage item use. Items have a lower cost than character-specific abilities (it can take countless Phozons to refill the Phozon Gauge, practically as many as will be absorbed by the weapon in the stage). Also, items can be replenished easily through the Alchemy Mix system, and have powerful, unique effects that are necessary for winning major battles. Finally, the item menu is rather easy to navigate. Meanwhile, Atelier Iris 2 does many things that discourage item use, the most important of which is the fact that items are relatively costly (despite the Mana Synthesis system making every useful item easy to acquire) and unnecessarily powerful (so strong that the basic Flame item can wipe out a whole enemy group). It is important that games match their systems to the way the game will be played. Players should spend more time and effort on things that will be necessary and useful, rather than on things which will not be useful or are unnecessary. Developers should not waste effort making countless items and systems that will not be used by the player.
Items should not become Fake Rewards because they will never be needed due to contradictions in game balance.
// Desc: Receives messages from content_script.
// https://developer.chrome.com/extensions/runtime#event-onMessage
// Params:
//   request: request object, must contain the following:
//     "event": string identifying an event
//     "data": any data associated with that event
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse){
    var badgeText;
    console.log("[EV] msg received: " + JSON.stringify(request));
    switch(request.event){
        case "setBadge":
            badgeText = request.data;
            setBadge(badgeText);
            break;
        default:
            console.log("[EV] nonsense msg");
            break;
    }
});

function setBadge(text){
    if(!text)
        return;
    chrome.browserAction.setBadgeBackgroundColor({ "color": "#3e3a3a" });
    chrome.browserAction.setBadgeText({ "text": text.toString() });
}

/******************************************************/
/******************    PRIVATE    *********************/
/******************************************************/

function isYoutube(url){
    console.log("[EV] url: " + url);
    if(url.startsWith("https://www.youtube.com")) return true;
    if(url.startsWith("http://www.youtube.com")) return true;
    if(url.startsWith("https://youtube.com")) return true;
    if(url.startsWith("http://youtube.com")) return true;
    if(url.startsWith("www.youtube.com")) return true;
    if(url.startsWith("youtube.com")) return true;
    return false;
}

// use this to execute content script file
function executeContentScript(tabId, event){
    console.log("[EV] executing content script: " + event);
    chrome.tabs.executeScript(tabId, {file: "content_script.js"});
}

// use this to send msgs to content script
function sendToContentScript(msg){
    console.log("[EV] sending msg: " + JSON.stringify(msg));
    chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
        chrome.tabs.sendMessage(tabs[0].id, msg);
    });
}

function __sendChannels(){
    chrome.storage.sync.get("channels", function(items){
        console.log("[EV] items: " + JSON.stringify(items));
        items = items || {};
        items.channels = items.channels || {};
        sendToContentScript(items);
    });
}

// chPath: string: channel path
// chName: string: channel name
function __blockChannel(chPath, chName){
    chrome.storage.sync.get("channels", function(items){
        console.log("[EV] __blockChannel, old: " + JSON.stringify(items));
        items = items || {};
        items.channels = items.channels || {};

        // mark as blocked/unblocked
        if(items.channels[chName])
            delete items.channels[chName];
        else
            items.channels[chName] = chPath;

        // update storage
        chrome.storage.sync.set(items, function(){
            if(chrome.runtime.lastError)
                console.log("[EV] error: " + JSON.stringify(chrome.runtime.lastError));
            else
                console.log("[EV] saved");
            __sendChannels();
        });
    });
}

/************ TESTING related code **********/
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse){
    if(!request.test)
        return;

    // receive test message
    console.log("[EV] test msg received: " + request.text);

    // send a test RESPONSE
    sendResponse("test response from events.js");

    // send a test MESSAGE
    chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
        var msg = { "test": true, "text": "test message from events.js" };
        chrome.tabs.sendMessage(tabs[0].id, msg, function(response) {
            console.log("[EV] response to test msg received: " + response);
        });
    });
});

// can be used if needed
/*
chrome.webNavigation.onDOMContentLoaded.addListener(function(details) {
    console.log("[EV] details: " + details.tabId);
    if(!isYoutube(details.url)) return;
    executeContentScript(details.tabId, "onDOMContentLoaded");
});

chrome.webNavigation.onCompleted.addListener(function(details) {
    console.log("[EV] details: " + details.tabId);
    if(!isYoutube(details.url)) return;
    executeContentScript(details.tabId, "onCompleted");
});
*/
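The chain of startsWith checks in isYoutube above can be tightened up with the URL parser available in both Chrome and Node.js. This is a sketch, not part of the original extension, and isYoutubeUrl is a hypothetical replacement name:

```javascript
// Returns true when url points at youtube.com, with or without a
// scheme or "www." prefix; false for anything else (including
// unparsable strings).
function isYoutubeUrl(url) {
  // Prepend a scheme so bare "youtube.com/..." strings parse too.
  var withScheme = url.indexOf("://") === -1 ? "https://" + url : url;
  try {
    var host = new URL(withScheme).hostname;
    return host === "youtube.com" || host === "www.youtube.com";
  } catch (e) {
    return false; // new URL() throws on invalid input
  }
}
```

Unlike the prefix checks, this compares the whole hostname, so a look-alike such as https://www.youtube.com.evil.com is rejected rather than matched by accident.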
Creating Breaks in Content (3:23) with Guil Hernandez
Learn to create line breaks in your content using <br>, and thematic breaks with <hr>.
As you learned at the beginning of the course, HTML condenses white space like 0:00
consecutive spaces, tabs or new lines into a single space. 0:04
Sometimes you'll need to create a break in the content 0:09
without having to specify a new paragraph or heading tag. 0:12
For example, if you're marking up a poem, an address, or any text where the division 0:16
of lines is significant, you can force breaks in content with the br tag. 0:21
So first, down in the contact section of index.html, 0:26
I'm going to write an address using HTML's address tag. 0:30
The address element represents contact information for a person, people, or 0:39
organization. 0:44
So inside the address tags, 0:45
I'll write an address, so 0:49
let's say Experience VR, and 0:53
below that we'll say 2017 Virtual Way, 0:58
City comma State, 33437. 1:04
Give this a save, refresh the browser. 1:09
And as expected, the browser displays each part of the address on the same line. 1:13
So let's use the br element 1:18
to instruct the browser that we want a line break after each line in the address. 1:20
Now, br is an empty element, just like the image element, so 1:26
it does not require a closing tag. 1:29
So after the first line, we'll add br and we'll do the same for the second line. 1:32
Refresh the page and as you can see the brs produce line breaks in the text, so 1:39
now the address is on three separate lines. 1:44
The hr or 1:49
horizontal rule element represents a thematic break in your content. 1:50
Imagine a scene change in a story or 1:54
a transition to another topic within a section of a reference book. 1:56
In HTML, hr can indicate to the browser the end of one section and 2:00
the start of another. 2:05
It separates different topics within a section of content.
2:06
So here in index.html, the aside element near the bottom of the file contains 2:09
additional content about VR resources and the famous quote about virtual reality. 2:15
So let's add a horizontal rule between the two sections of 2:21
content to indicate a break. 2:25
And by default, the browser displays an hr as a horizontal line. 2:32
Now you may see empty elements like br, 2:38
hr, even img, written with a trailing slash, like so. 2:41
And the trailing slash clearly indicates that these are empty elements 2:47
with self-closing tags. 2:51
The slash is optional and both are valid HTML. 2:52
Now it's important that you use br and hr properly. 2:56
So for example, you shouldn't use hr just for 3:00
the sake of aesthetics to display a line in your site. 3:03
Likewise, you shouldn't use two, three, four or 3:07
more br tags at once just to add space between lines of text. 3:09
Aesthetics are best handled by CSS or Cascading Style Sheets. 3:14
And you'll learn lots more about styling content with CSS in a later course. 3:18
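Putting the video's steps together, the markup built in this lesson looks roughly like the following (reconstructed from the transcript, so treat the exact text as illustrative):

```html
<!-- Contact section: <br> forces each part of the address onto its own line -->
<address>
  Experience VR<br>
  2017 Virtual Way<br>
  City, State 33437
</address>

<!-- In the aside: a thematic break between the resources and the quote -->
<hr>
```

As noted in the video, <br> and <hr> are empty elements, so the self-closing forms <br/> and <hr/> are equally valid.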
Razor (UO:R Community Edition)¶

Razor is a free tool designed to help with simple tasks while playing Ultima Online. This guide was written for the Razor UO:Renaissance Community Edition. The goal of this project is to revive and continue development and maintenance of the abandoned Razor project, focusing on quality-of-life improvements while attempting to keep the spirit and vision the original developers had for Razor, and not driving down the path of advanced automation and scripting found in other UO assistants.

Razor was originally designed by Bryan Pass, known as Zippy in the RunUO community, as a replacement for UOAssist. Based on commit notes, active development (new features, bug fixes) on Razor ceased some time in the early 2010s, with the source code being released in 2014. The code initially didn't include Loader.dll, which is required to fully integrate with the UO client. At some point, the code for those projects was released into the same GitHub repo. The original project was last updated May 2nd, 2014, which was simply an update from .NET 2.0 to .NET 4.0, and while over 50 forks exist on GitHub, none of them have been active or have made significant changes, with a few exceptions:

- Jaedan (which this version is based on), who updated the project to compile and work in Visual Studio 2017 and made improvements to Crypt.dll that enabled this project to move forward.
- SaschaKP, who made several performance changes from generic to non-generic collections that I incorporated in the first release.

I have been actively maintaining this project since early April 2018 and based this version off of 188.8.131.52, which was simply version 184.108.40.206 updated to .NET 4.0. Another version of Razor exists (the 1.0.14.x versions) and is/was maintained by another private shard that made some enhancements, notably around targeting. This version of Razor has long since incorporated the majority of the changes you can find in that version.
In June 2019, integration into ClassicUO was officially established.

UO:R Community Edition¶

When I started this project back in early 2018, nearly all of the feedback, ideas, discussion and testing came from the UO:Renaissance community, whose rules only allow the use of Razor, and so that name was used not only to distinguish this version from the other versions of Razor that are available, but to give credit to a community that provided so much support early on. Since then, this version of Razor has been updated to support the ClassicUO client, with feedback and contributions coming from all different corners of the freeshard UO community -- from large to small shards. Thank you to all the folks across the whole community who have contributed in some way towards creating this version of Razor. If you'd like to contribute, see the CONTRIBUTING file for more information.

TL;DR: If you want to use this version of Razor, regardless of the Ultima Online server you play on, this version should work. It isn't tied to any specific shard. These updates to Razor are for the whole Ultima Online community to use and benefit from. Play UO on the shard that gives you the most enjoyment. For me, that shard is UO:Renaissance.

If you're unable to find a solution using the information here, or you'd like to submit a feature request or bug report, use the following resources.
- Submit a feature/bug/issue on our official Razor GitHub Repo.
- Join us in #razor on Discord (this is the official ClassicUO Discord server)

For more information about the Razor Scripting Engine, go here.

All work is released under the GPLv3 license. This project does not distribute any copyrighted game assets. In order to run this application you'll need to legally obtain a copy of the Ultima Online Classic Client. See the LICENSE file for details.
Ah, that's a significant departure, I'd say, from the basic fantasy game template, and quite an interesting idea. Between-game continuity. TA: By this time we were operating in spite of the games we were playing, rather than because of them, I think. It was 1998, 3D FPSs had already been out and popular for five years or so, I guess, and I was hardly playing anything anymore. A lot of the games we played, like the Ultimas, also kind of got us into thinking about the worlds themselves, rather than just playing a game in one. So I hit a Borland compiler limit, 65K something or other, and moved on to other projects. I didn't see a monetary future in my games, and I wasn't into CS as an academic pursuit, so I got serious about math for a few years, and went to grad school. The summer before I went to grad school, though, we restarted the fantasy project. This time, it was called Slaves to Armok: God of Blood, named after Armok, the god from dragslay. Armok himself was named after "arm_ok", a variable that counts the number of arms you have left, for inventory purposes. This was a 2D project in a somewhat-isometric view, where you walked around a cave with a bunch of goblins in loincloths. It was entertaining, but short-lived. I got started on the actual Slaves to Armok that I released on Bay 12 around 2000-2004 or so. This was our fantasy game. Lots of complex things going on, and of course, a boatload of plans. It was unwieldy, and got even more so when I went 3D. This is the first piece of Dwarf Fortress. Now, you might have seen the various, questionable games littered throughout my site. I would occasionally take time off from Armok and grad school to write really short projects on weekends or whenever I found time or had an idea. A few of them got released, and many others just died before they saw the light of day. A game called Mutant Miner is one of these. It was roughly inspired by Miner VGA and Teenage Mutant Ninja Turtles mutagen, or something. 
As in Miner VGA, you'd dig out tunnels in the ground under a few buildings, looking for minerals and dealing with threats. It's turn-based. In Miner VGA you can find many things in the ground. In Mutant Miner, we added green radioactive goop. You could take it back to one of the buildings at the top, and apply it to yourself to grow extra arms and other mutations that would help you combat threats down below. Which were just, like, these holes that would spit out enemies or blocks of slime that you'd encounter in the mountain. I eventually wanted to put in extra miners though, and since it was turn-based, it started to drag like a battle in an SSI Gold Box game. And instead of rewriting the game, I thought, well, maybe it should be dwarves instead. And it should be real-time, to combat the SSI problem. Now, you'd be digging out minerals in a mountain, combating threats inside, and making little dwarves. Then I thought, well, how should the high score list work? We really like to keep records of plays. Not just high score lists, but expansive logs. So we'll often try to think of ways to play with the idea. This time, the idea was to let your adventurer come into the fortress after you lose and find the goblets you've made, and the journals it generates. If your adventurer successfully brought these back to town (after facing threats in the now-abandoned fortress), the player would get to see the fortress's stats. For instance, if they found a journal that said "This month, we produced 3 silver goblets...," they get the entire set of stats on silver goblet production in the score list. That was the idea. It was supposed to take two months. I started in October 2002. Creating the game? TA: Yeah, longer games took a few weeks, most of the other ones up there took 2 days (ww1medic, corin, Kobold Quest, etc.) so I didn't think it was a bad estimate. And it might not have been, but I called up my brother and we just kept planning it out.
It became obvious that it was really stealing thunder from Armok which was right in the middle of its life cycle at this time, so DF development was actually stopped that November and I went back to Armok, which was still going all right, and various other projects.
In order to automate this process, JMeter-Plugins -- a great must-have, free and open-source third-party library -- comes to the rescue: it provides a command-line utility, JMeterPluginsCMD, that can convert the data in any JTL results file into graphs (PNG images) as well as CSV format. The JMeter Jenkins plugin is capable of parsing those result lines and producing graphs when running JMeter on Jenkins, and for real-time monitoring you can stream results into InfluxDB and visualize them with Grafana. At the end of this tutorial you will be able to create a live JMeter test-result dashboard similar to the following.

We run our test from the command line by invoking:

jmeter -n -t test.jmx -l results.jtl

Where the arguments we pass on the command line are:

-n - run in headless/non-GUI mode
-t - the test plan (.jmx file) to execute
-l - the file to log sample results to

Running a load test from the command-line interface, or CLI, can help by using significantly fewer resources than the GUI, so it is best to use non-GUI mode while performing actual load tests. Advanced JMeter users rarely perform regular JMeter tests via the GUI; they prefer to set up a test and run it in non-GUI mode from the command line. To preserve system resources further, reduce the amount of data your listeners record: it will result in less memory used and fewer points in the graphs. Properties can be set in the jmeter.properties file (user.properties is also applicable) or on the command line with -J, for example -Jthreads=10. Reports generated during the load tests are saved at a location specified by the user, and any errors in the test plan or the test itself will appear in the log files.

JMeter can also generate its HTML dashboard report directly from a non-GUI run, and sample variables can be passed on the command line:

jmeter -Jsample_variables=ExampleVar -n -t /path/to/testplan.jmx -l results.jtl -e -o /path/to/dashboard/folder

See the full list of command-line options for all possible JMeter command-line arguments, listed and explained. There may be occasions when you want to combine multiple result files: for example, if a test was run in non-GUI mode on multiple hosts, each host will generate its own JTL file. The Simple Data Writer listener is a simple way to record results to such a file.

When a JMeter job is built on Jenkins, the output is a JTL file, which cannot be inspected as conveniently as opening the Response results tree in the JMeter GUI -- so can we convert those files into PNG images and, for instance, attach them to an email? JMeterPluginsCMD makes exactly this possible: it consumes JTL files and produces the same graphs as the JMeter GUI plugins (for example, the Response Time Graph produced by the Graphs Generator plugin). The Plugins Manager can also be driven from the command line, which is useful for installing plugin sets such as the 3 Basic Graphs in non-GUI environments (JMeter 5.x). To run JMeter on Jenkins, download Jenkins from the Jenkins site and deploy it, either into a servlet container such as Tomcat or by running the JAR from the command line. Finally, JMeter test plans (scripts) can be shared with the product development team to run on their local or dev environment, to get performance insight at an early stage in the lifecycle of the product.
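To tie the pieces together, here is a sketch of the two-step workflow: run the plan in non-GUI mode, then turn the JTL into a PNG with JMeterPluginsCMD. The jar location, plugin type, and file names are assumptions to adjust for your installation; the script only echoes the commands so it can be inspected before running:

```shell
#!/bin/sh
# Assumed locations -- adjust to your own JMeter install.
JMETER_HOME="${JMETER_HOME:-$HOME/apache-jmeter}"
CMDRUNNER="$JMETER_HOME/lib/ext/CMDRunner.jar"   # ships with jmeter-plugins
JTL="results.jtl"

# 1) Run the test plan headless, logging samples to the JTL file.
run_cmd="jmeter -n -t test.jmx -l $JTL"

# 2) Convert the JTL into a response-times-over-time PNG graph.
graph_cmd="java -jar $CMDRUNNER --tool Reporter \
--generate-png response-times.png \
--input-jtl $JTL \
--plugin-type ResponseTimesOverTime \
--width 800 --height 600"

# Echo rather than execute, so the sketch can be inspected anywhere.
echo "$run_cmd"
echo "$graph_cmd"
```

The --plugin-type value selects which GUI graph to reproduce; swapping in a different type (or --generate-csv) yields the other charts and the CSV output mentioned above.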