Here are 15 common questions about Oracle:

1. What is Oracle?
Oracle is a relational database management system (RDBMS) that supports a wide variety of workloads, including transaction processing, analytics, and mixed workloads. It offers customers high availability, scalability, security, and performance.

2. Who uses Oracle?
Organizations of all sizes use Oracle databases to run their businesses. For example, banks use Oracle to process transactions and manage customer data; retailers use Oracle to track inventory and analyze customer buying patterns; and airlines use Oracle to store flight information and passenger reservations.

3. What are some of the features of Oracle?
Some of the key features of Oracle include:
- Advanced Analytics: This allows you to perform in-database analytics, including predictive and statistical modeling, data mining, text mining, and machine learning.
- Application Development: Oracle provides a complete set of tools for developing applications that can be deployed on-premises or in the cloud.
- Big Data Management: Oracle helps you manage big data with capabilities such as Hadoop integration, NoSQL support, and scale-out architecture.
- Cloud Computing: Oracle offers a comprehensive and integrated set of cloud services that allow you to build, deploy, and manage applications in the cloud.
- Database Security: Oracle provides industry-leading security features that help you protect your data from unauthorized access, theft, and corruption.

4. What are some of the benefits of using Oracle?
Some of the benefits of using Oracle include:
- Flexibility: Oracle offers a wide range of options for deploying databases, including on-premises, in the cloud, and in hybrid environments.
- Scalability: Oracle databases can be easily scaled up or down to meet the changing needs of your business.
- Security: Oracle provides industry-leading security features that help you protect your data from unauthorized access, theft, and corruption.

5. What are some of the drawbacks of using Oracle?
Some of the drawbacks of using Oracle include:
- Cost: Oracle databases can be expensive to license and maintain.
- Complexity: Oracle databases can be complex to manage and administer.

6. What types of workloads does Oracle support?
Oracle supports a wide variety of workloads, including transaction processing, analytics, and mixed workloads.

7. What is Oracle's licensing model?
Oracle offers a number of different licensing models, depending on the products you use. For example, Oracle Database Enterprise Edition requires a license for each server on which it is installed, while Oracle Database Standard Edition requires a license for each database instance.

8. How much does an Oracle database cost?
The cost of an Oracle database depends on a number of factors, including the edition you choose, the number of users, and the amount of data you need to store.

9. How do I get started with using Oracle?
If you're new to Oracle, the best way to get started is to take advantage of the many resources that are available, including books, online tutorials, and Oracle's own documentation.

10. What versions of Oracle are available?
Oracle offers a number of different versions of its database products, including Enterprise Edition, Standard Edition, and Express Edition.

11. What platforms does Oracle support?
Oracle databases can be deployed on a wide variety of platforms, including Linux, Windows, and macOS.

12. What languages does Oracle support?
Oracle databases can be used with a variety of programming languages, including Java, C++, and Python.

13. What tools does Oracle provide for managing databases?
Oracle provides a number of tools for managing databases, including Oracle Database Control, Oracle Enterprise Manager, and Oracle SQL Developer.

14. What training and certification does Oracle offer?
Oracle offers a variety of training and certification programs for its products, including the Oracle Certified Professional program.

15. Where can I find more information about Oracle?
If you're looking for more information about Oracle, the best place to start is the Oracle website. There you'll find a wealth of resources, including product documentation, online tutorials, and community forums.

Oracle is a powerful database management system that offers a wide range of features and capabilities. It can be expensive to license and maintain, but it is very scalable and offers excellent security features. Oracle is a good choice for businesses that need a robust database management system that can handle a variety of workloads.
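Since several of the questions above touch on getting started and on Python support, here is a minimal, hypothetical sketch of how a connection is typically addressed from Python. It assumes the python-oracledb driver; the host, port, service name, and credentials are made-up placeholders, and only the Easy Connect string builder is actually exercised below.

```python
# Sketch: building an Oracle Easy Connect descriptor ("host:port/service")
# and what a connection would look like with the python-oracledb driver.
# All names here (dbhost.example.com, orclpdb1, hr) are illustrative only.

def easy_connect_dsn(host: str, port: int, service_name: str) -> str:
    """Return an Easy Connect descriptor such as 'dbhost:1521/orclpdb1'."""
    return f"{host}:{port}/{service_name}"

dsn = easy_connect_dsn("dbhost.example.com", 1521, "orclpdb1")
print(dsn)  # dbhost.example.com:1521/orclpdb1

# With the driver installed (pip install oracledb), connecting and running
# a query would look roughly like this:
#
# import oracledb
# with oracledb.connect(user="hr", password="...", dsn=dsn) as conn:
#     with conn.cursor() as cur:
#         for row in cur.execute("SELECT city FROM locations"):
#             print(row)
```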
Job Not Found. Sorry, we couldn't find the job you were looking for. Find the latest jobs here:

- I'm looking for a web designer who can help with brand identification, logo design, and web design for a software/advisory company.
- We are looking for a full-time search engine marketing expert, with a focus on organic search engine marketing, to manage the SEO program on websites focused on selling gourmet foods online. The candidate should have a proven track record with SEO projects and increasing organic rankings, and a solid foundation in the steps and tasks required to get the job done. Candidate must speak and write...
- I have installed MagnusBilling, which comes with Asterisk 13, on Debian. I want someone to configure WebRTC clients and provide a "how to" guide file. Thanks.
- I am an entrepreneur. I started as an online shopping assistant, but I want to expand by creating an affordable, timeless collection, inspired by the strong, intelligent, and important women around me. I want to project the impact that the way we dress has on how we see ourselves. My first collection will therefore be aimed at empowering young women and girls (...
- I'm looking for a marketing expert who can help us enter the digital space to expose our company to more people. If the project interests you, send a message without any problem.
- An entrepreneur notices that his mobile tonnage chain is successful and decides to expand his business to other cities around the world. He realizes that he needs a method to calculate in time the areas with the largest population, to then select how to redistribute their tons on each area. He consults with other people in the IT area and realizes that he needs a special algorithm that uses all t...
- Are you familiar with the payment system SILKPAY? As there is no module for this payment system, we would like to find an experienced developer who is able to add it to our website via configuration and coding. If you are able to carry this out, then apply or give us feedback. Thank you.
- We need 150+ manual white-hat SEO backlinks: social bookmarking, article submission, Web 2.0 profile creation, as per your choice. Note: don't use any kind of third-party tools or software.
- We have a SharePoint site with multiple lists and document libraries. We want a dashboard developed in Power BI and integrated into SharePoint for a management view. Need this job done ASAP. NOTE: Bid only if you have proven expertise.
- Hi there, I need a quote for the electrical installation of electric vehicle charging stations in Malaysia. Here are the specs: 1. Underground parking garage 2. 100 ft of cabling (single or 3 phase) 3. 32 A current capacity 4. Wall-mounted installation. I need a quote from someone who can get this job done for me, with a list of all the necessary components, tools, permits, and costs for material and labor.
<?php

declare(strict_types=1);

namespace Majermi4\FriendlyConfig\Exception;

use Majermi4\FriendlyConfig\ParameterTypes;
use ReflectionParameter;
use RuntimeException;

class InvalidConfigClassException extends RuntimeException
{
    public const MISSING_CONSTRUCTOR = 1;
    public const MISSING_CONSTRUCTOR_PARAMETERS = 2;
    public const MISSING_CONSTRUCTOR_DOC_COMMENT = 3;
    public const INVALID_CONSTRUCTOR_DOC_COMMENT_FORMAT = 4;
    public const UNSUPPORTED_CONSTRUCTOR_PARAMETER_TYPE = 5;
    public const UNSUPPORTED_NESTED_ARRAY_TYPE = 6;

    public static function missingConstructor(string $className): self
    {
        return new self(
            sprintf(
                'Class "%s" must declare a constructor so it can be used by FriendlyConfig to set Symfony config parameters.',
                $className,
            ),
            self::MISSING_CONSTRUCTOR
        );
    }

    public static function missingConstructorParameters(string $className): self
    {
        return new self(
            sprintf(
                'Class "%s" must declare at least one constructor parameter so it can be used by FriendlyConfig to set Symfony config parameters.',
                $className,
            ),
            self::MISSING_CONSTRUCTOR_PARAMETERS
        );
    }

    public static function missingConstructorDocComment(string $className): self
    {
        return new self(
            sprintf(
                'Constructor of class "%s" must define a PhpDoc with type declaration for its array types (such as "array<string>") so it can be used by FriendlyConfig to set Symfony config parameters.',
                $className,
            ),
            self::MISSING_CONSTRUCTOR_DOC_COMMENT
        );
    }

    public static function invalidConstructorDocCommentFormat(string $parameterName): self
    {
        return new self(
            sprintf(
                'Constructor parameter "%s" must have a PHPDoc annotation in the following format "@param array<T> $%s" where T is a nested type.',
                $parameterName,
                $parameterName,
            ),
            self::INVALID_CONSTRUCTOR_DOC_COMMENT_FORMAT
        );
    }

    public static function unsupportedConstructorParameterType(ReflectionParameter $parameter): self
    {
        /* @phpstan-ignore-next-line */
        $className = $parameter->getDeclaringClass()->getName();
        /* @phpstan-ignore-next-line */
        $parameterType = $parameter->getType()->getName();

        return new self(
            sprintf(
                'Constructor parameter "%s" of class "%s" must be one of the supported types (%s) or a valid class so it can be used by FriendlyConfig to set Symfony config parameters. "%s" given.',
                $parameter->name,
                $className,
                implode(', ', ParameterTypes::SUPPORTED_PRIMITIVE_TYPES),
                $parameterType,
            ),
            self::UNSUPPORTED_CONSTRUCTOR_PARAMETER_TYPE
        );
    }

    public static function unsupportedNestedArrayType(ReflectionParameter $parameter, string $nestedType): self
    {
        /* @phpstan-ignore-next-line */
        $className = $parameter->getDeclaringClass()->getName();

        return new self(
            sprintf(
                'Constructor parameter "%s" of class "%s" must define valid nested array type so it can be used by FriendlyConfig to set Symfony config parameters. "%s" given.',
                $parameter->name,
                $className,
                $nestedType,
            ),
            self::UNSUPPORTED_NESTED_ARRAY_TYPE
        );
    }
}
use octocrab::models::repos::Object;
use octocrab::models::Repository;
use octocrab::params::repos::Reference;
use octocrab::Octocrab;
use serde_json::Value;
use warp::http::StatusCode;
use warp::{Rejection, Reply};

pub async fn run() -> Result<impl Reply, Rejection> {
    let token = read_env_var("GITHUB_TOKEN");
    let octocrab = Octocrab::builder().personal_token(token).build().unwrap();
    let repo_list = vec!["Jackett"];
    for repo_name in repo_list {
        let repo = octocrab.repos("htynkn", repo_name).get().await.unwrap();
        match repo.parent {
            None => {
                info!("current repo:{} not have parent info, skip", repo_name)
            }
            Some(parent_repo) => match parent_repo.full_name {
                None => {
                    info!("current repo:{} not have full name, skip", repo_name)
                }
                Some(full_name) => {
                    let parent_repo: Repository = octocrab
                        .get(format!("/repos/{}", full_name), None::<&()>)
                        .await
                        .unwrap();
                    info!("find parent:{} with branch:{:?}", full_name, parent_repo.default_branch);
                    let default_branch = parent_repo.default_branch.unwrap();
                    let parent_ref = octocrab
                        .repos(parent_repo.owner.unwrap().login, parent_repo.name)
                        .get_ref(&Reference::Branch(default_branch.to_string()))
                        .await
                        .unwrap();
                    let sha = if let Object::Commit { sha, .. } = parent_ref.object {
                        sha
                    } else {
                        panic!()
                    };
                    info!("find ref {:?} to repo:{}", sha.to_string(), full_name);
                    let x: Value = octocrab
                        .patch(
                            format!("/repos/htynkn/{}/git/refs/heads/{}", repo_name, &default_branch),
                            Some(&serde_json::json!({ "sha": sha.to_string() })),
                        )
                        .await
                        .unwrap();
                    info!("update success for {} with {:?}", full_name, x);
                }
            },
        }
    }
    Ok(StatusCode::OK)
}

fn read_env_var(var_name: &str) -> String {
    let err = format!("Missing environment variable: {}", var_name);
    std::env::var(var_name).expect(&err)
}
Schema.org microformatting - itemprop within href tag?

I'm trying to implement microformatting on the site, specifically for the cities where we are active. I'm hoping this will help us rank in local search. This is what I have been doing: In Google's Rich Snippets Testing Tool, that yields this: addresslocality = City Name. However, I've also done this: In Google's tool, that gave me this: text = City Name, href = http://www.domain.com/webpage. So which is better? I've read schema.org quite a bit, and I know that microformats are really meant to be used on individual pages, but I didn't know it was considered inappropriate to use them on internal links. I'll rethink this. Thanks!

Rich snippet markup should be implemented on the page containing the information, not on links pointing to the pages. It is difficult to advise on which approach would be best because, from what I can gather, it doesn't sound like you're implementing the markup correctly. Better to mark up the content on the location pages themselves with the address, locality, city/town etc. as per my example above. Might be worth reading up on exactly how Schema data works.

Yes, we have individual pages based on location. I want internal links to those pages to carry microformatting. I put the itemprop attribute within a tag, which seems to work fine, but putting the itemprop within an <a> tag seems to apply to both the text (i.e. the city name) and the URL of the page. I'm wondering if the second approach is better.

What are you trying to achieve? Do you have a page per location on your website, and is that where you're implementing geo/location based schema? Which is better depends on its purpose: schema markup is intended to provide as much information to search engines as possible, anything of relevance to a website's products or services. It is likely what you're after is the organisation markup; this can also appear in Google Places listings too.
Look at the Microdata link at the bottom of that page and implement what is relevant to your website from there. Currently rich snippet markup won't have a direct influence on rankings; however, correct implementation will make results from your website appear more prominent in search results. It would be worth looking over the specification for Schema's organisation / local business markup too, so you can see what is available to you. Something such as the following should be relevant to what you are trying to achieve:

Address Line 1
Address Line 2
City / Town
County / State
Post / Zip Code
777 888 999

Hope that points you in the right direction for now.

Hello Moz Team, I hope everyone is doing well. I need a bit of help regarding schema markup. I am facing an issue specifically with my blog posts: in the majority of my posts I find the error "Missing field "url" (optional)". As this schema is generated by the Yoast plugin, I haven't applied any custom steps. Recently I published a post https://dailycontributors.com/kisscartoon-alternatives-and-complete-review/ and I tested it on two schema testing platforms: 1. validator.schema.org 2. search.google.com/test/rich-results. The validator shows no error, but the rich results test gives me the warning "Missing field "url" (optional)". Is this really going to be an issue for my ranking? Please help, thanks!

Hi, this might seem silly. What is the correct syntax for the meta tag used when noindexing webpages? I have "". I have seen it both with and without the forward slash before the greater-than sign. Does it make any difference if the forward slash is present or not? Cheers

Hi. I'm using Squarespace, and I've noticed they assign the page title and site title h1 tag status. So if I add an on-page h1 tag, that's three in total. I've seen what Matt Cutts said about multiple h1 tags being acceptable (although that video was back in 2009 and a lot has changed since then). But I'm still a little concerned that this is perhaps not the best way of structuring for SEO. Could anyone offer me any advice? Thanks.

Hi Mozzers, we have a website that has both http as well as https indexed. I proposed the solution of implementing a canonical link tag on all pages (including the login/secure ones). Any disadvantages I could expect? Thanks!

Hi, my site is www.in2town.co.uk. I am using an SEO tool to check on my site and how to improve the SEO. The tool is here: http://www.juxseo.com/report/view/51ebf9deab900. For some reason it has brought up errors; it claims I have not got a meta description even though I have, and I have double-checked in my source code. The errors it has brought up are as follows, and I would like to know if this is a fault of the SEO tool or whether I am doing something wrong:

Does the description tag exist? 0/1
Explanation: The meta description tag does not help your rankings but it is your opportunity to encourage prospects to click. The meta description should describe the content of your web page, include a strong call to action, and include your keyword.
Action: Make sure you are using the meta description tag. It is found in the head section of your page.

Is there only one description tag? 0/2
Is your description less than 156 characters? 0/1
Is your keyword in the description tag? 0/3

It also flags the canonical tag, which it claims I have more than one of:

Is the canonical tag optimized? Is there only one canonical tag? 0/4
Explanation: You only need one of these to direct a search engine. Don't muddy the waters.
Action: Make sure you only have one canonical tag. This only applies if you use the canonical tag.

Any help and advice would be great. Regards

I have a website that has a contact us page, of course, and on that page I have schema info pointing out the address and a few other points of data. I also have the address of the business location in the footer on every page. Would it be wiser to point to the schema address data in the footer instead of the contact page? And are there any best practices when it comes down to how many times you can point to the same data, and on which pages? So should I have the schema address on the contact us page and the footer of that page? That would be twice, which could seem spammy. I haven't been able to find much best-practices info on schema out there. Thanks, Cy

I am seeing in Webmaster Tools that Google is trying to follow the text-based truncated URL from SuperPages, despite the fact that it is not in an anchor tag. The net result for Google is a 404 error as it tries to access pages that do not exist. Has anyone seen this issue before, and any suggestions on how to prevent these errors? A SuperPages listing: the first link works fine, but the text-based link shown below is cut off, and as a result Google gets a 404 page.

http://swbd-out.superpages.com/webresults.htm?qkw=dr+hylton+lightman&qcat=web&y=0&x=0&
Dr. Hylton Lightman, MD - Pediatrician, Allergist, - Far Rockaway ...
Dr. Hylton Lightman, MD, Far Rockaway, NY, Rated 4/4 By Patients. 26 Reviews, Patients' Choice Award Winner, Phone Number & Practice Locations
www.vitals.com/doctors/Dr_Hylton_Lightma... [Found on Bing]
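To ground the advice in the first thread above, here is a hedged sketch of what microdata address markup on a location page itself (rather than on internal links pointing at the page) might look like. The business name, address values, and phone number are placeholders taken from the answer's skeleton, not real data:

```html
<!-- Hypothetical location page: mark up the address where the content lives,
     not on links pointing at the page. -->
<div itemscope itemtype="https://schema.org/LocalBusiness">
  <span itemprop="name">Example Company</span>
  <div itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
    <span itemprop="streetAddress">Address Line 1</span>
    <span itemprop="addressLocality">City / Town</span>
    <span itemprop="addressRegion">County / State</span>
    <span itemprop="postalCode">Post / Zip Code</span>
  </div>
  <span itemprop="telephone">777 888 999</span>
</div>
```

With this in place, a rich-results test should report the locality from the page content itself, which avoids the ambiguity of attaching itemprop to an anchor that carries both link text and an href.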
Thanks to all the people who sent me a plugin for vCheck 6.0; these have now been published as part of vCheck 6.10 and zipped into a single download file which can be downloaded from below. I also love the enhancements which have been sent to me for the core vCheck script; these include a script which times and reports on how long the plugins take, great for troubleshooting! As always you can find more information about vCheck and how to use this by visiting this page. There are now over 70 plugins, some fantastic stuff. Below is a list of items which have changed or been fixed since 6.0:

# v 6.10 – Fixed multiple spelling mistakes and small plugin issues
# v 6.9 – Fixed VMKernel logs but had to remove date/Time parser due to inconsistent VMKernel Log entries
# v 6.8 – Added Creator of snapshots back in due to popular demand
# v 6.7 – Added Multiple plugins from contributors – Thanks!
# V 6.6 – Tech Support Mode Plugin fixed to work with 5.0 hosts
# V 6.5 – HW Version plugin fixed due to string output
# V 6.4 – Added a 00 plugin and VeryLastPlugin for vCenter connection info to separate the report entirely from VMware if needed.
# V 6.3 – Changed the format of each Plugin so you can include a count for each header and altered plugin layout for each plugin.
# V 6.2 – Added Time to Run section based on TimeToBuild by Frederic Martin
# V 6.1 – Bug fixes, filter for ps1 files only in the plugins folder so other files can be kept in the plugins folder.

If a plugin is not needed in your environment, remove it from the plugins folder, as it will speed up the execution of your script.

Sorry for the delay in replying… work 🙂 I was running PowerCLI 5.0.0. I thought this was the latest version, as that is the latest version on the VMware download site under vSphere 5, drivers and tools. Wrong!
For those who are looking for the newer version, it can be found on the communities page at http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/powercli

Thanks for the new 6.15 version; it definitely works better, and the issues I mentioned have all been fixed. All that is left for now is to define what vCenter rights are needed for each check.

Glad you got it fixed Ron, that was puzzling me!

Figured out the problem. Seems as though someone had messed with the vCenter and Windows Server 2008 privileges in between tests. Thanks for steering me in the right direction. It's all good now… Thanks for responding, Alan.

I am using the same account, which is why I'm a little surprised with the output. Strange indeed.

Ron, I'm not sure what the issue is here. PowerCLI is just a client that hooks into the vSphere API; it shouldn't matter where you run it from, you should get the same results. Are you using the same user account each time?

@Alan, hopefully the third time's a charm and you'll respond in kindness. :-o) There appears to be a limitation when running the script from a VM (that is not the vCenter) when there is more than one Datacenter. Hoping you can either confirm, deny and/or advise if a workaround exists (if applicable). When running vCheck from the VM, I'm only getting results from the Datacenter in which the VM resides. I'd like the report to return results from all other Datacenters listed within the target $VIServer. When running vCheck from the actual vCenter itself or another device, such as a workstation that is not being managed by vCenter, the report contains all objects from all Datacenters. Is what I'm experiencing expected behaviour? If not, is there anything you can suggest that would need to be done to correct this anomaly?

Gregory, this is a known issue with PowerCLI 5.0; VMware fixed the issue in 5.0.1.

I've been using version 6.0 and now 6.15 of your vCheck program, which gives a lot of useful information.
I wanted to point out to you a bug in the Get-Datastore cmdlet, which returns incorrect values for CapacityGB and FreespaceGB. This has caused your Datastores (less than 50% free) report to produce inaccurate results in the CapacityGB and FreespaceGB columns (no fault of yours). In fact, it appears that CapacityGB should actually be the value of FreespaceGB and vice versa. The values for CapacityMB and FreespaceMB are correct and can be used to calculate the correct GB figures by dividing by 1024. I hope this information is helpful to you.

---- vCheck 6.15 released ----

# v 6.15 – Added Category to all plugins and features to vCheckUtils script for Categories.
# v 6.14 – Fixed a bug where a plugin was resetting the $VM variable so later plugins were not working 🙁
# v 6.13 – Fixed issue with plugins 63 and 65 not using the days
# v 6.12 – Changed Version to PluginVersion in each Plugin as the word Version is very hard to isolate!
# v 6.11 – Fixed a copy and paste mistake and plugin issues.

Paul, thanks for your comments…

03: are you running PowerCLI 5.0.1? There was a bug with PowerCLI 5.0; try 5.0.1.
50: Thanks, you led me to a bug with a plugin where it was resetting the $VM variable, and therefore any plugin after that which referenced VMs was wrong.
71: A new, faster version will be included in the next release. The next version will be released tonight.

Your script just keeps on getting better. The new v6 script is definitely faster than the previous version (8 minutes instead of 2 hours), and the time-taken information is definitely useful… I have found a few issues though. 🙁

03 Datastore information: this reports the values the wrong way round. Capacity should be bigger than free space. 🙂 Confirmed the values using vCenter.

50 VMs with CPU or Memory Limits Configured: for some reason not all VMs are being reported. I know I have 10 to 15 VMs with memory limits but the report only gives 1. I have not yet worked out why.
If I run just that test, it gives the correct results, but not if run as part of all the tests.

71 Capacity planning: takes a long time and gives a division-by-zero error.

Also, it would be useful to know what privileges should be granted to a role, and at which level the role/user needs to be applied in vCenter. I want to have a local user on the vCenter server that runs vCheck and only has privileges to get the information. I know (now) that I need to grant the role at the top level (otherwise check 44, VMkernel warnings, reports nothing; not all of check 43's vCenter event logs are shown; etc.), but what is the minimum set of privileges needed? Maybe it would be useful to add checks within each check to ensure enough privileges have been granted, and warn if insufficient. It may also be useful if the script reported when starting a test rather than just when it's finished, and maybe also which test number (makes it easier to find when you want to remove it). Finally, everyone, please keep up the good work.

Nice job again. When I was initiating the 6.10 version, I saw some param questions come up twice:

# Set the number of days to show VMs created for : 7
# User exception for Snapshot created/removed [ s-veeam]:
# Set the number of days to show VMs created for : 7
# User exception for Snapshot created/removed [ s-veeam]: 7

I think it is a minor issue…

Hi Alan. Thanks. We replicate our servers to a DR site, which runs in 30 minutes at a time. Does the Snapshot Information plugin only check snapshots which are kept for a long period of time? It should not take that long to check 2 days; we only have 30 VMs. Also, any update on the sendemail function in vCheck 6.0 and 6.10?

Manesh, there were more changes to that plugin than just that; if you prefer the old plugin you can always use it in the new version of the script. The new plugin attempts to find the user who created each snapshot too; this takes time, as it searches back through the logs.

Thanks Alan. On version 6.0 I have the snapshot age set to 7 and it just runs, but on 6.10 it's set to 2 and it just sits there. Also, regarding the sendemail function, we still have an issue: we had to revert to the send command from version 5.0 in order for us to send the email.

Thanks Alan for implementing all those improvements.

Manesh, the next thing it runs after that is the Snapshot Information plugin; this can take a long time if you have lots of snapshots.

vCheck 6.10 just sits at "finished calculating General Information" for a long time. Any ideas? I have reverted back to vCheck 6.0.
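The Get-Datastore comment above describes a workaround: the MB values are correct even when the GB values come out swapped, so GB can be recomputed by dividing by 1024. Here is a small illustrative sketch of that arithmetic, plus the "less than 50% free" condition the report uses; the sample capacities are invented, not taken from any real datastore:

```python
# Recompute datastore GB figures from the (correct) MB values, as the
# commenter suggests, and flag datastores that are under 50% free.
# The 512000/153600 MB sample values below are invented for illustration.

def mb_to_gb(mb: float) -> float:
    """Convert megabytes to gigabytes (1 GB = 1024 MB)."""
    return mb / 1024.0

def under_half_free(capacity_mb: float, free_mb: float) -> bool:
    """True when less than 50% of the datastore capacity is free."""
    return free_mb / capacity_mb < 0.5

capacity_gb = mb_to_gb(512000)   # 500.0 GB
free_gb = mb_to_gb(153600)       # 150.0 GB
print(capacity_gb, free_gb, under_half_free(512000, 153600))
# 500.0 150.0 True
```

The same division is what the vCheck "Datastores (less than 50% free)" plugin needs once the swapped GB properties are bypassed.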
We are moving more and more into a "touchable" world, and many different surfaces can be interactive. They can be tiny, or as big as a wall. Defining the user experience depends a lot on the context and on the size. Touching a wall pushes you to use your whole body to find balance and interact with the object, while on small devices it is much more difficult to find the stability to do what you need to do. In this case, defining a correct target area is vital for an application.

Touch Gestures & Targets

Designing for touch screens means that you have to combine software ergonomics with physical ergonomics, because users will assume a certain position while interacting with your app through the device. Think of your users walking, stopped at a traffic light, travelling, or just shopping. You design for hands and fingers: think about their position and movement while consuming the app's content. Once you have defined your users' profile, you can refine the number of actions required to interact with your app, focusing on those that improve usability. For each interaction there should be instant, visual feedback: never let the users wonder what is going on in your app. The gesture library is already well known in the designer community and is quite standardized, but for every system the question is: how do you choose the right gesture? The focus goes first to the content that you would like to make interactive, not to the type of gesture that you would like to apply to it. The first question is: how would you like to explore this object? At that point you can apply the proper gesture. As you can see, choosing the correct gesture to interact with content is becoming part of the organization of information. For this reason, describing interactions in your scenarios and making a paper prototype to simulate usage are useful steps in building the UI.

Defining a target area is another important interaction point. Usually we think in terms of the "icon's shape", but this is not always evident; on Windows Phone especially, due to its minimal chrome, you have to think in terms of "object and shape". This can be a very useful exercise to break the habit of thinking of the target area only as a control's or icon's space. Don't forget to use your own experience and carry it into your design. For example, if you decide to reduce the target size, you know that touching that space will be much more difficult, especially if you put it near other elements that are also clickable. And you already know that adding more space between the controls can solve this problem, because you have experienced it yourself. This is the list of questions that you can ask yourself when you create a clickable object:

● What is its shape? How should users react to a rounded (or rectangular) shape?
● Where is it located in the context? How can the user locate it?
● What is its function?
● How often will it be used?
● How can you prevent user errors, and how can you create a way to recover from them?

Answering these questions means thinking about how your users will interact with the application. Will your users use the app while they are moving? Or sitting on a sofa? Watch out for mis-taps, especially in applications, like the dialer, that are used while users don't have a stable touch. Starting from my experience, I have talked here about one single problem in designing for small touch screens, because it is the most common one that, as a designer, you have to deal with every day.
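The advice above, larger targets and more space between adjacent clickable elements, can be expressed as a simple design-time check. The 9 mm minimum target and 2 mm minimum gap used here are assumed example thresholds for illustration, not figures taken from the article:

```python
# Sketch: validate touch targets against a minimum size and minimum gap.
# MIN_SIZE_MM and MIN_GAP_MM are assumed example thresholds, not guidelines
# quoted from the article.

MIN_SIZE_MM = 9.0
MIN_GAP_MM = 2.0

def target_big_enough(width_mm: float, height_mm: float) -> bool:
    """A target is comfortable when both dimensions meet the minimum size."""
    return width_mm >= MIN_SIZE_MM and height_mm >= MIN_SIZE_MM

def gap_ok(right_edge_a_mm: float, left_edge_b_mm: float) -> bool:
    """Check the horizontal gap between two side-by-side targets."""
    return left_edge_b_mm - right_edge_a_mm >= MIN_GAP_MM

print(target_big_enough(10, 10))   # True: comfortable target
print(target_big_enough(6, 10))    # False: too narrow, expect mis-taps
print(gap_ok(20.0, 21.0))          # False: only 1 mm between targets
```

A paper prototype catches the same problems by hand; a check like this simply makes the rule explicit once you have chosen your own thresholds.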
Today there are two current versions of Windows worth considering: 7 and 10. We will not consider XP, since it is thoroughly outdated. But 7 is also ageing, and it is no longer possible to buy a new laptop with Windows 7 on board. Computer manufacturers install Windows 10 even on very weak computers, and people struggle with Windows 10 because the computer simply cannot handle it. Let's take a look at the minimum system requirements for Windows 10: 1 or 2 gigabytes of RAM (depending on the CPU architecture; 1 GB for the very old ones) and a processor of at least 1 GHz with support for PAE, NX, and SSE2 (in practice, almost any processor released in recent years). But as practice shows, it is almost impossible to work on a computer with a 2-core 1.1 GHz processor (for example, a Celeron N). So, you should change Windows 10 to Windows 7 in the following cases. I will try not to burden you with terminology and describe it as succinctly and simply as possible:
- The computer was running Windows 7, you switched to Windows 10, and the computer began to slow down noticeably and have problems with hardware drivers.
- Your PC has a processor with a frequency of less than 2 GHz. It is worth understanding that Windows will of course run on such a CPU, but the processor's power will only be enough to run the OS itself, nothing more.
- You have 1 or 2 gigabytes of RAM and no way to add more. Yes, Windows 10 will work even on 1 gigabyte, but modern programs often require more. On Windows 7, at least programs will have an easier time.
- You do not have a solid-state drive (SSD) and cannot install one. The difference in how Windows 10 runs with and without an SSD is huge; I can hardly imagine using it with a regular hard drive. Older versions of Windows do not have this problem and work briskly from an HDD.
Let me remind you that an SSD drive is many times faster than an HDD.
- You don't like Windows 10 for some reason. But understand that 7 will be more vulnerable to hacking and attacks than 10. Also, hardware manufacturers no longer support Windows 7 and do not provide drivers for it. I personally ran into such a problem with an external sound card: in Windows 10 the driver works fine, in Windows 7 the native driver would not install (it errored out), and the driver that Windows 7 did pick up has a defect: when listening to music, a strong "digital noise" is audible. Understand that when installing Windows 7 on new hardware, various compatibility problems can surface; in that case I recommend the English-language forums on the Internet, as most likely your problem has already been solved by some enthusiast. Unfortunately, it is impossible to list all the possible problems. What if you don't want Windows 7? I advise you to look at the various Linux distributions. It's exciting and interesting: you can customize your computer for yourself and get valuable experience with an advanced operating system that is not aimed at a mass audience. Well, that seems to be everything I wanted to tell you. Thanks for reading!
Uploading PDFs to Google Drive
Using the Google Drive integration for Fillable PDFs, your PDFs can be sent to your Google Drive after they have been generated via Fillable PDFs.
Note: The Google Drive integration is only available to active Professional and Agency tier Fillable PDFs license holders. Personal license holders will instead be shown a button to upgrade to a Professional license in order to use the integration.
Authenticating with Google Drive
There is no additional plugin to install to activate the Google Drive integration, but you do need to authenticate Fillable PDFs with your Google account to get started. To authenticate, navigate to the Forms > Fillable PDFs page in your WordPress admin and then open the Integrations settings tab. In the list of available integrations, find Google Drive and click the Connect to Google Drive button. Complete the authentication with Google following the on-screen prompts. Once you've completed the process successfully, you'll be redirected back to the global Fillable PDFs settings page; if all went well, you should now see a Disconnect from Google Drive button for the Google Drive integration in the Integrations settings tab.
Setting Up the Google Drive Integration
Now that you've successfully authenticated with Google in your Fillable PDFs settings, you'll need to set up your Fillable PDFs feed(s) to enable the upload to Google Drive and to specify where the generated PDFs will be stored in your Google Drive. To do this, navigate to the Forms > FORM NAME > Settings > Fillable PDFs page in your WordPress admin and either add or edit a Fillable PDFs feed. The Integrations settings tab will display any integrations you've set up, allowing you to toggle them on for the feed.
Setting the Google Drive toggle in this list to the enabled state will reveal a few new settings:
Destination Folder
This is the only required setting for the integration; it specifies where in your Google Drive the feed will attempt to write files. Keep in mind this is relative to the user you've authenticated the integration with, so the upload root is always the Google Drive root of the authenticated Google user. Merge tags are also supported in this setting, which allows you to feed dynamic data from the entry tied to the PDF being uploaded. You could, for instance, use this to create a directory for every entry, feeding data like the entry ID or a name into the directory name.
Alternate File Name
This allows you to optionally override the Output File Name setting in the General settings tab of the feed, in case you'd like the file name of the PDF uploaded to Google Drive to differ from the name it was given when initially generated. Merge tags are supported here just like in the normal file name setting for the feed.
Important Note Regarding File Names
If there is already a file with the same name in the location the integration is writing to in Google Drive, that existing file will always be overwritten by the new file. If you're worried about the integration overwriting existing files already stored in your Google Drive, make sure you set something dynamic from a merge tag in either the file name setting(s) or the destination folder setting, so that a file with a unique name or location is always written.
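The merge-tag behaviour described above amounts to template substitution of entry values into a path. Here is a toy sketch of the idea, not the plugin's actual implementation; the tag names {entry_id} and {name} and the function name are assumptions for illustration:

```python
import re

def expand_merge_tags(template, entry):
    """Replace {tag} placeholders in a folder/file template with entry values.
    Unknown tags are left untouched rather than silently dropped."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(entry.get(m.group(1), m.group(0))),
                  template)

entry = {"entry_id": 512, "name": "Acme Invoice"}
path = expand_merge_tags("Invoices/{entry_id}/{name}.pdf", entry)
# Dynamic pieces like the entry ID keep uploaded files unique,
# which avoids the overwrite behaviour described above.
```

Feeding the entry ID into the folder or file name is what guarantees a unique destination per entry.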
Running SSIS packages continuously without scheduling
No more Batch ETL
A few weeks ago I wrote a post about the concept of continuous execution of individual ETL processes to achieve 'eventual consistency'. In that post I made the case to step away from 'batch' execution of ETLs, where related processes are executed as a mini workflow, in favour of fully independent execution of the individual (modular) ETL processes. I have spent some time developing this concept in SQL Server using my VEDW Data Vault sample set and wanted to share some techniques for achieving this. It works even better than expected; if you have a SQL Server environment, it's worth checking out the code below. The concepts are applicable to various architectures, of course. As a brief reminder, the 'eventual consistency' approach aims to remove all dependencies in loading data by allowing each individual ETL process (i.e. Hub, Link, Satellite) to run whenever possible – in a queue of sorts. Enabling this requires a pattern change; for instance, in the Data Vault example used here, the ETLs needed to be updated (regenerated) to load from the Persistent Staging Area (PSA) using their own self-managed load window. This load window is set for every unique execution based on the committed data in the PSA (based on the ETL control framework, not the RDBMS). The load windows of individual ETLs differ slightly, as each ETL process runs at a slightly different time, but this is no problem since you can derive the last moment consistency (Referential Integrity) was achieved from the control framework metadata. This information can be used for loading data into the next layer.
The setup – two continuous queues
What I want to achieve for this test is to have two separate 'queues' running continuously:
- One queue that loads new data (delta) into the Staging / Persistent Staging Area, and
- One that loads data into the Data Vault from the Persistent Staging Area.
As outlined above, no dependencies are managed between processes; the ETLs are required to 'just work' every time they are executed. To up the ante a bit more, I want the first queue (PSA) to run more frequently and use more resources than the Data Vault queue. I have tested this concept using SQL Server and SQL Server Integration Services (SSIS), and have generated the VEDW sample set as physical SSIS ETL packages (using Biml, of course). The generated packages have been deployed to an Integration Services catalog (the default SSISDB was used). The result looks like this: the screenshot shows a subset of the deployed packages in the SSIS catalog. In the ETL control framework I use (DIRECT), the workflows start with 'b_' (for 'batch') and the individual ETLs start with 'm_' (for 'modular'). For all intents and purposes I have generated the 'batch' ones to run the Staging and Persistent Staging steps in one go, but arguably you can load into the PSA directly. To keep things simple I will create one queue for the 'batch' packages (covering Staging and Persistent Staging), and another queue for all the Data Vault objects (prefixed by 'm_200').
Setting up a continuously running process queue for SSIS
Running something continuously, over and over again – almost as an ETL service – isn't straightforward in SSIS. There is no out-of-the-box option for this, but there is a trick to make it work (unless you want to invest in a message broker). An easy way is to create a job in SQL Server Agent containing a single T-SQL step. The job can be configured to start when the server starts, and it will keep running once started. The T-SQL step is where the real process handling takes place and where the polling is implemented. The idea is to create an endless loop that:
- Checks whether new ETLs can start based on parameters, and waits if this is not the case.
The number of concurrent ETL executions is the parameter used in this example. If fewer than 3 are running, the next ETL is started according to a priority list (the queue). If 3 (or more) ETLs are already running, there is a 30-second wait before a new attempt is made.
- Executes the SSIS package that is next in line directly from the SSIS catalog using the corresponding T-SQL commands. The priority order is set by the queue, in this example organised by last execution time: the jobs that haven't run for the longest are prioritised.
- Handles exceptions so the SQL statement doesn't fail. This is implemented using a TRY…CATCH block that deactivates an ETL in the control framework if there is an issue, so it won't be attempted again unless reactivated. An example use case is when the queue attempts to execute a package that is not available in the package catalog (i.e. hasn't been deployed).
Thanks Luciano Machado for the WHILE 1=1 idea! The logic is as follows:

-- Create a temporary procedure to act as parameter input,
-- i.e. calculate the number of active ETLs
CREATE PROCEDURE #runningJobs @NUM_JOBS int OUTPUT
AS
BEGIN
  SELECT @NUM_JOBS = (SELECT COUNT(*) FROM <ETL control framework> WHERE <execution status is 'running'>)
END
GO

DECLARE @MAX_CONCURRENCY INT
DECLARE @NUM_RUNNING_JOBS INT
DECLARE @DELAY_TIME VARCHAR(8)
DECLARE @JOBNAME VARCHAR(256)
DECLARE @CURRENT_TIME VARCHAR(19)

SELECT @MAX_CONCURRENCY = 3
SELECT @DELAY_TIME = '00:00:30' -- The time the queue waits upon detecting full concurrency

WHILE 1 = 1
BEGIN
  EXEC #runningJobs @NUM_RUNNING_JOBS OUTPUT

  -- Whenever the number of jobs reaches the maximum, wait for a bit (as per the delay time)
  WHILE (@NUM_RUNNING_JOBS >= @MAX_CONCURRENCY)
  BEGIN
    WAITFOR DELAY @DELAY_TIME
    EXEC #runningJobs @NUM_RUNNING_JOBS OUTPUT
  END

  -- When a spot becomes available, run the next ETL from the queue
  SELECT TOP 1 @JOBNAME = ETL_PROCESS_NAME
  FROM (
    -- Select the Module that hasn't run for the longest (oldest age)
    SELECT * FROM <the queue>
  ) QUEUE
  ORDER BY <latest execution datetime> ASC

  BEGIN TRY
    -- Execute the ETL
    DECLARE @execution_id BIGINT
    EXEC [SSISDB].[catalog].[create_execution]
      @package_name = @JOBNAME,
      @execution_id = @execution_id OUTPUT,
      @folder_name = N'EDW',
      @project_name = N'Enterprise_Data_Warehouse',
      @use32bitruntime = False,
      @reference_id = NULL

    SELECT @execution_id

    DECLARE @var0 SMALLINT = 1
    EXEC [SSISDB].[catalog].[set_execution_parameter_value]
      @execution_id,
      @object_type = 50,
      @parameter_name = N'LOGGING_LEVEL',
      @parameter_value = @var0

    EXEC [SSISDB].[catalog].[start_execution] @execution_id
  END TRY
  BEGIN CATCH
    <do something, i.e. disable the ETL in the queue, send an email, etc.>
  END CATCH

  WAITFOR DELAY '00:00:05' -- A delay to throttle execution. A minimum delay (1 second) is required to allow the system to administer ETL status properly.
END

DROP PROCEDURE #runningJobs

When this is started, either directly as a SQL statement or as part of a SQL Server Agent job (T-SQL step), the process will keep running until stopped.
Organising the queue
What about the queue itself? In its simplest form this can be a view that lists the ETL packages to be executed, as long as each name corresponds with the name of the object in the SSIS catalog. At the very least the .dtsx suffix needs to be added, as this is how package files are stored in SSIS. The view I used queries the ETL object names from the ETL control framework, as they need to be declared there anyway for the control wrapper to work; in other words, the ETL names are already available. All I need is to select the most recent execution instance for each ETL in scope, so the list can be ordered ascending. This forces the ETL process that hasn't run for the longest to the top of the queue. It becomes really easy to set up various queues, as all it takes is a T-SQL statement and a corresponding view (or other object). Creating the second queue was a matter of seconds, and a third Data Mart queue can be configured in similar fashion. When executing the queues you can see the ETL process executions happening in the ETL control framework. I specifically started the Data Vault queue first to confirm no data would be loaded, which makes sense because the PSA was still empty. After a few minutes I started the Staging / Persistent Staging queue, and one by one (three at a time really, due to the concurrency setting) the PSA tables were populated. At the same time, the Data Vault queue processes started picking up the delta as soon as the PSA process for a specific table completed successfully. With the queue being a view, you can watch the order change while processes execute: an ETL process that was at the top of the list moves to the bottom and slowly makes its way back up again, as shown in the following screenshot. It all works really well, and after a while Referential Integrity was achieved. Also, the results were 100% the same as they were in the VEDW and Batch approaches.
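The staleness-based ordering described above (the ETL that hasn't run for the longest sits at the top, and drops to the bottom once it executes) can be sketched in a few lines. This is a language-neutral illustration of the idea, not the actual T-SQL view; the package names are made up:

```python
from datetime import datetime

# Last execution time per ETL process (hypothetical names)
last_run = {
    "m_200_HUB_CUSTOMER.dtsx": datetime(2017, 1, 1, 10, 0),
    "m_200_SAT_CUSTOMER.dtsx": datetime(2017, 1, 1, 9, 30),
    "m_200_LNK_ORDER.dtsx":    datetime(2017, 1, 1, 9, 45),
}

def next_in_queue(last_run):
    """The queue orders by last execution ascending: oldest first."""
    return min(last_run, key=last_run.get)

etl = next_in_queue(last_run)                  # the Satellite ran longest ago
last_run[etl] = datetime(2017, 1, 1, 10, 15)   # after executing, it moves to the back
```

Because the ordering key is simply the latest execution timestamp, every process is guaranteed to get its turn, which is what lets eventual consistency emerge without managed dependencies.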
Changes in the data were also picked up and propagated without any problems. The number of currently executing ETLs, as used in the example here, is a fairly crude mechanism, but it is clear that this can easily be adjusted to more sophisticated resource-management parameters such as CPU or memory usage. While I didn't implement this for the example here, a queue should also have ways to validate the completeness of ETL processes. This is relevant because previously the internal dependencies were safeguarded in the batch-style mini workflows; since the batch is gone, you need other ways to make sure all required ETL processes are present. The easiest way to apply checks like these is to validate that every Link or Satellite has corresponding Hubs relative to the (shared) source table. The same applies to Link-Satellites, of course, which need their supporting Link and Hub ETLs to be present somewhere in the queue. You need to prevent having, say, a Satellite that is loaded from a specific source table without a Hub process that loads from the same table. This is nothing new; the same rules apply and the required metadata is already available. It's just that enforcing these rules is slightly different in a queue. 'The queue' is a good concept and works really well. If you have a PSA (why wouldn't you?) you may want to give it a go, as the results surpassed my expectations. As a nice side effect, it also makes re-initialisation super easy: all you need to do is truncate your control table (or at least the relevant records) and the system does the rest to reload deterministically. Copying data is no longer needed, and you can't even make a mistake here because the patterns can re-process already loaded data without failure. On top of this, it natively handles graceful completion of the ETL control framework wrapper, because stopping the job doesn't kill the SSIS package execution; it just prevents new processes from spawning.
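The completeness rule above (every Satellite or Link loading from a source table needs a Hub ETL loading from that same table somewhere in the queue) could be checked with a sketch like this. It is illustrative only; the metadata shape and names are assumptions, not the actual control framework schema:

```python
def missing_hub_sources(queue):
    """queue: list of (etl_type, source_table) pairs.
    Return the source tables that feed a Satellite or Link
    but have no Hub ETL loading from them."""
    hub_sources = {src for typ, src in queue if typ == "HUB"}
    dependants = {src for typ, src in queue if typ in ("SAT", "LNK")}
    return dependants - hub_sources

queue = [("HUB", "CUSTOMER"), ("SAT", "CUSTOMER"), ("SAT", "ORDER")]
gaps = missing_hub_sources(queue)  # ORDER has a Satellite but no Hub process
```

Running a check like this before (or while) the queue executes surfaces exactly the gaps that the old batch workflows used to mask.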
This means you can even put it on a schedule if you want the queue to operate only for limited periods. Win-win! This is an example of how 'eventual consistency' can be implemented using SQL Server, and I recommend looking into it.
We actively monitor change covering more than 150 key elements of life. Halcyon curates the most significant complexity-related content from carefully selected sources. Please contact us if you'd like our help with complexity-related challenges.

The key lesson of complexity is that a lack of constraints can only be chaotic, whereas over-constraint leads to catastrophic failure - Dave Snowden, http://www.cognitive-edge.com/blogs/dave/2010/05/tommy_can_you_hear_me…

The complexity of the present world is shattering expectations in every arena, most especially in the geography of the soul. Lost as we all are, we can understand why some retreat into fundamentalisms that provide archaic certainties, holding houses of containment before the onrush of new realities. Others wander in a spiritual void, overwhelmed by the loss of all pattern, looking to material accomplishments to replace the loss of essence. Still others flee into "replacement strategies" - psychotherapy, drugs, sex, growth seminars, travel - Jean Houston, http://www.dailygrail.com/Religion-and-Spirituality/2010/3/The-Future-G…

One of the problems of taking things apart and seeing how they work - supposing you're trying to find out how a cat works - you take that cat apart to see how it works, and what you've got in your hands is a non-working cat. The cat wasn't a sort of clunky mechanism that was susceptible to our available tools of analysis - Douglas Adams

Most of us think of our DNA as sort of locked in our body, waiting to be passed on to our children, but in fact your DNA at every moment is interacting with your environment, interacting with every bite of food you take, interacting with your thoughts, your feelings, and various things, so when you take a bite of food, literally, the information - beyond the calories in the food - goes right into your cells, into your DNA, and switches on genes, or turns off genes based on what information is in that food - Dr. Mark Hyman

Man's mind cannot grasp the causes of events in their completeness, but the desire to find those causes is implanted in man's soul. And without considering the multiplicity and complexity of the conditions, any one of which taken separately may seem to be the cause, he snatches at the first approximation to a cause that seems to him intelligible and says: "This is the cause!" - Leo Tolstoy, War and Peace

Complexity thinking in contrast focuses not on closing the gap to some ideal future but on describing the present and making small changes now, in order to evolve to some future state which could not be anticipated but which is more stable and resilient than any ideal - Dave Snowden, http://cognitive-edge.com/blog/entry/5811/all-is-vanity-nothing-is-fair/

Categorisation is dangerous as you miss out the fuzzy bits between - Dave Snowden, http://www.cognitive-edge.com/blogs/dave/2010/03/km_hong_kong.php

The UN is a new form of evolution which should occupy the attention of the best minds of this planet. It is a comprehensive evolving network: a system of innumerable, interacting sub-systems, from nations to individuals, from the infinitely large to the infinitely small, within an ever-expanding time framework. It is an entirely new, fundamental biological phenomenon which will wipe out all former political science. We need more biologists as political leaders - Robert Muller
There is a story flying around, thanks to Engadget, that claims "Bernd Marienfeldt and fellow security guru Jim Herbeck" have "discovered" a security issue in the iPhone. Do they really have to use "guru" as their title? It immediately makes me doubt the sophistication of the story, but I digress… They say the issue is that if you connect an iPhone, using the phone's USB cable, to a computer running Ubuntu 10.04, the phone is mounted and accessible. The phone can be locked, the phone can be encrypted, but the Linux system will still mount the phone and provide open access to its filesystem. First, some would say this is clearly a security hole because Apple has "tested and confirmed" that it is one. They bank their argument on the word "confirmed". Apple has also stated it has no fix and does not plan on having one. This latter statement is what I would like to call your attention to today. Note that the ability to mount an iPhone in Linux and natively access its files has been a public project under development since 2007. The alpha code was released as iFuse in early 2009 and tested by many people. Towards the end of 2009 iFuse became libimobiledevice, and it was so successful that several major distributions have included it in their packages:
- openSUSE 11.0, 11.1, 11.2, Factory
- Fedora 12+ (packages in the official repositories)
- Mandriva (packages available in "Cooker")
- Ubuntu Karmic, Ubuntu Lucid (packages in the official repositories)
It is from this context that I find it a bit odd that "security gurus" have tried to claim "discovery" of this functionality and brand it a flaw. While the passcode has never prevented me from mounting the iPhone in Linux, the libimobiledevice project says this has not been their experience:
27.05.2010: Some security sites report that even passcode enabled devices get auto-mounted. We could not reproduce this yet. However it might point at some bug during boot in the iPhone OS.
Accessing a passcode enabled device the first time does not work in our tests as one would expect. Devices taking more time booting might be affected though, on any OS.
Maybe there is an intermittent issue here, but I am able to reproduce it on all my Linux systems. In fact, this is the only way it has worked for me over many months. I considered the iPhone insecure for this as well as many other reasons. That is why I believe the real issue here boils down to whether you consider the iPhone a secure device or not. Do you? If you are in the camp that thinks the iPhone can be a secure device, then once again you are in for a surprise. It is not, and this is definitely not the first time this has been discussed openly. Anyone with a computer and a Linux CD has been able to access everything on your phone for over a year. Moreover, there has been a rash of attacks that target people who are actively using the iPhone. Thieves know that if they get the phone away from the owner while it is not locked, they have easy access to the data; owners should know this too. If you are in the camp that does not think the iPhone can be a secure device… you are right. In fact, you might even work for Apple and be one of the people who said "we have no fix and no plan to make a fix" for any number of the control points for data confidentiality. In other words, the absence of a plan to make a fix says to me that Apple does not see this as a serious flaw, let alone a flaw at all. Perhaps they just confirmed that Linux is able to read the filesystem properly, in the same way that a thief who grabs the phone can use apps and access data. Here is what anyone who plugs an iPhone into a Linux computer can see:
1) Plug the iPhone into the computer using the Apple USB cable that comes with the phone. You will see it appear as a mounted filesystem on the desktop.
2) You will then be prompted with two dialog boxes, one for music and one for photos.
3) You can choose to browse the filesystem from those dialog boxes instead of opening applications to manage music and photos, or you can cancel the dialogs and just open the filesystem from the desktop. Either way: full access to unencrypted data without needing to know the PIN.
Surprised? I downloaded and installed Ubuntu 10.04 the day it was released and the iPhone has always appeared this way to me. It did not seem to me that my data was any more exposed than I had already thought. Perhaps I am giving the Apple team too much leeway when I say there is no new issue here and no fix needed, but I also do not think anyone should have seriously considered the iPhone to be a secure or safe device. It is highly unsafe at any speed. News? Even in a physical security review I immediately found it designed to be incredibly fragile and prone to disaster. At least once a week I see a tweet from someone about an iPhone failure. Not news. Perhaps for the same reason Apple put the infamous and unreliably sensitive water sensors in the iPhone that void your warranty when triggered, no one should operate this device on the assumption that it is designed to protect data without significant outside controls and enhancements. Giant foam cases, screen covers, vacuum-sealed bags… the list of things to buy to protect the phone goes on and on. None of it seems to come from Apple. Likewise, we have known for years that proper encryption and authentication for the filesystem is not something you will be getting when you purchase an iPhone. I do not feel that knowing this about Apple products can really be called enlightenment.
The syllable gu means shadows
The syllable ru, he who disperses them,
Because of the power to disperse darkness the guru is thus named.
Accordions: Eight patterns in Civi markup – reduce & make more accessible?
At present Civi docs recommend making an accordion interface element with .crm-accordion-wrapper. There are at least another seven ways accordions are implemented in Civi, as seen in #56. Some of these look/behave differently, and maybe have to be different, but perhaps some could be merged. Furthermore, none of these use the basic aria-labels recommended in the Bootstrap 3 method (which remains in Bootstrap 5) for letting screen readers know whether an accordion is open or closed, and which contents should be read or not. This is a long-standing issue - core#3294 - raised by the late accessibility expert Rachel Olivero (the Olivero theme is named after her). Sidenote: Bootstrap's own accordion pattern (.collapse) doesn't seem to be used anywhere in Civi.
- can we reduce the number of accordion markup patterns that a theme must support from 8 to anything less?
- can we amend the recommended accordion markup (and in turn Civi's accordion implementations) to be more accessible (+ modern/responsive)?
The patterns:
1. .crm-accordion-wrapper - as recommended in the docs UI reference guide
2. .crm-dashlet-header - as seen on the home dashboard. Looks the same, but only the expand icon (e.g. arrow) is clickable, as the rest of the header needs to be a draggable region.
3. .crm-collapsible - notably used on the field groups on the contact record main tab - it has no background for the header or body
4. .crm-collapsible on a fieldset - as seen on event signup pages
5. .collapsed in a table - as seen on the extensions listing page
6. the .crm-ui-accordion Angular directive - a shorthand to generate the .crm-accordion-wrapper type accordion
7. .af-collapsible on a FormBuilder element - as seen in forms generated by Form Builder - a new JS/CSS/markup pattern added in 2022
8. another jQuery/Angular fieldset accordion used in the SearchKit builder that's not in ThemeTest yet.
Range of characteristics
These eight variations cover 5 different visual/interaction patterns:
- click on the full header to expand/close
- click only on the expand/close icon to expand/close
- shaded background for header and body
- transparent background for header and body
- expand a region outside of the parent accordion wrapper (5 - maybe soon to be replaced with SK/FB)
My initial thought is that with a bit of extra CSS and a tiny bit of rewriting, all of these could be done with two patterns: one based on the current recommended wrapper > header + body, with the full header clickable and the icon added entirely in CSS; and one based on an icon wrapped in an <a> tag that toggles the visibility of another region, as seen in patterns 2 and 5 above (while 5 may soon be replaced and 2 is a one-off).
- A utility class like crm-accordion-clear could be applied next to .crm-accordion-wrapper to give 1) the same UI as 3)
- Likewise, two new CSS selectors could allow the same pattern to be applied to 4) & 8)
Both these changes are demonstrated in a new ThemeTest branch.
Proposals
- I'm not sure why in 7) + 8) the new SK/FB Angular accordions diverged in markup/js/css from Civi's recommended accordion - maybe @colemanw needs to feed in.
- The extensions page 5) will be rewritten with FB/SK at some point, so it can maybe be ignored.
- That leaves 2) - an accordion where only the icon triggers an expand/close, so the rest of the header can be a draggable region. I think that making this use the same markup/css pattern as the others would require rewriting all the others, so it might be safest to leave this as an exception for now. There is an attempt to merge the patterns in the ThemeTest branch, but the header is doing a lot - dragging plus right-floated icons - and a toggle behaviour fits better than an accordion here, even if it should look like an accordion.
- But even keeping 2), that potentially leaves only 2 patterns out of 8.
Thoughts on accessibility
Work has been done to implement jQuery Accessible Accordion Aria into Civi (demo of the plugin - https://a11y.nicolas-hoffmann.net/accordion/). Not sure if that is still a promising path, or whether there are jQuery-free methods now. The current recommended pattern could be made more accessible with something like the following, based on Bootstrap 3's collapse. There are fewer ARIA labels than in the jQuery plugin, but it makes clear whether the accordion is open or not, and links the header to the expand region with an id - two recommendations of this walkthrough on accessible accordions.

<div class="crm-accordion-wrapper collapsed">
  <div class="crm-accordion-header" aria-expanded="false" aria-controls="uniqueID">
    Accordion Title here
  </div>
  <div class="crm-accordion-body" id="uniqueID">
    <div class="crm-block crm-form-block crm-form-title-here-form-block">
      Accordion Body here
    </div>
  </div>
</div>

This would require two changes - a unique ID added to every unique accordion on a page, and an aria-expanded="false" attribute that toggles true/false.
- why did the new Angular accordions take another approach? Speed of development, or something practical?
- is it worth trying to reduce these down to two recommended patterns, by swapping out the new Angular patterns and adding a few new CSS classes into core? Or is it better to wait and flag this for the future?
- what's the easiest way to make Civi accordions more screen reader friendly?
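As a rough illustration of the two changes above (a unique id per accordion, plus an aria-expanded attribute linking header to body), here is a sketch that stamps out the proposed markup. Python is used purely for illustration; Civi would generate this in its own PHP/Smarty/Angular layers, and the function name is made up:

```python
from itertools import count

_ids = count(1)

def accordion(title, body, collapsed=True):
    """Render the proposed accessible accordion markup with a unique body id.
    aria-controls on the header points at the body's id, and aria-expanded
    starts "false" for a collapsed accordion; JS would flip it on toggle."""
    uid = f"crm-accordion-body-{next(_ids)}"
    return (
        f'<div class="crm-accordion-wrapper{" collapsed" if collapsed else ""}">'
        f'<div class="crm-accordion-header" aria-expanded="{str(not collapsed).lower()}" '
        f'aria-controls="{uid}">{title}</div>'
        f'<div class="crm-accordion-body" id="{uid}">{body}</div>'
        f'</div>'
    )

html = accordion("Contact Details", "…fields…")
```

The counter guarantees the "unique ID added to every unique accordion on a page" requirement without asking form authors to invent ids by hand.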
[Question] What's the current status of MIDI playback or SoundFont2 support?

Hi, first of all thanks for the tiny but great library! I've already used this library in one of my side projects. My intended usage is to play simple MIDI files that vgmtrans extracted, and it works great so far. Then I poked around and tried to load some MIDI files which are more complex, and noticed some noticeable differences between TinySoundFont, FluidSynth and Timidity++. I'll attach the MIDI file for testing below; my question is: what's the current status of MIDI playback / SoundFont2 support? I saw an issue thread (https://github.com/schellingb/TinySoundFont/issues/14#issuecomment-371381296) which is related to this question, but I'm not sure if it's up-to-date. The MIDI file is striving.zip, which is downloaded from here, and the SoundFont that I used to test this MIDI file is Reality_GMGS_falcomod.sf2, which can also be downloaded from here. Thanks again!

Hey there! The comment you have linked is still up-to-date. The linked soundfont uses both mentioned generators, ChorusEffectsSend and ReverbEffectsSend, which are not yet implemented. It also defines modulators, but I can't easily tell if the modulators are just the default ones (which TSF adheres to) or if they are customized. This can make the generated output sound wrong by a little or a lot, depending on how much the soundfont relies on these features. The generators can be added to the library if we have an effect implementation that makes sense for us (not too much code, not too much memory required, not too much processing required, no license limitations). But modulators? I think it would need a large rewrite to make every generator controllable by modulators. It would no doubt increase the performance requirements, which could make it more problematic to use on lower-end platforms (weaker ARM devices or embedded platforms). I wouldn't even know where to start implementing modulators.
As much as I would love to have support for them :-)

Thanks for the quick response!

"The comment you have linked is still up-to-date." - Just want to confirm, is the "MIDI playback" part also up-to-date? i.e. "MIDI playback: Besides regular instrument selection, note on and off, we do have support for pitchwheel, pan and volume messages. Which are the most common ones. I think the biggest missing thing is support for switches like sustain, portamento, sostenuto, soft pedal and legato. I have not evaluated how hard implementation of these would be."

While trying to test the result, I am using QMidiPlayer, which is FluidSynth-based and allows the user to manually turn the Chorus and Reverb effects off. There is still a noticeable difference when using the same SoundFont with Chorus and Reverb both turned off. For convenience, here is a clip so you can hear the difference (the attachment is actually an m4a file, but GitHub won't allow me to upload m4a files directly, so I changed the file suffix to mp4 instead): https://github.com/schellingb/TinySoundFont/assets/10095765/96acc3eb-e329-4b1b-bfec-715361c4f233

The first half is rendered by the FluidSynth-based QMidiPlayer (with Chorus and Reverb disabled); the second half is rendered by my TinySoundFont and TinyMidiLoader-based MIDI player. It sounds like sustain, soft pedal and/or legato is missing in the latter, but I'm not quite sure about it.

Loading the MIDI in https://signal.vercel.app/edit does show usage of the "Hold Pedal" controller, which I think according to https://anotherproducer.com/online-tools-for-musicians/midi-cc-list/ is MIDI CC 64, which in TinyMidiLoader is TML_SUSTAIN_SWITCH. TML reads the control messages fine; it's just that when they are passed to tsf_channel_midi_control, it won't do anything with unsupported controllers. All that is supported there are controllers 0, 6, 7, 10, 11, 32, 38, 39, 42, 43, 98, 99, 100, 101, 120, 121, 123. This hasn't changed for many years, so yeah, none of these pedal controllers are handled.
Some sound fairly easy to implement though, certainly easier than adding support for chorus, reverb or modulators :-)

Oh, it seems it's indeed caused by CC 64. Now I have a good starting point to know where to look :) Thanks for the info and thanks again for the great library! I'll close this issue now :D

Would like to link it here so it's easier to find: https://github.com/schellingb/TinySoundFont/pull/88 - this PR works quite well to address the CC64 issue mentioned above :)

It would also seem that MIDI CC1 is not supported in TinySoundFont, as it should by default activate the vibrato LFO.
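For reference, the general technique for CC64 (which the PR linked above implements properly inside TSF) can be sketched independently of TinySoundFont's internals. This is not TSF's actual code - the struct and function names below are made up - it just models the rule: while the pedal is down, note-offs are deferred rather than released, and lifting the pedal releases everything that was held:

```c
#include <stdbool.h>
#include <assert.h>

#define MAX_NOTES 128

/* Minimal, hypothetical model of MIDI CC64 (sustain/hold pedal) handling. */
typedef struct {
    bool sounding[MAX_NOTES];  /* voice currently audible           */
    bool sustained[MAX_NOTES]; /* note-off arrived while pedal down */
    bool pedal_down;
} Channel;

void note_on(Channel *c, int key)
{
    c->sounding[key] = true;
    c->sustained[key] = false; /* re-triggering cancels a pending release */
}

void note_off(Channel *c, int key)
{
    if (c->pedal_down)
        c->sustained[key] = true;  /* defer the release */
    else
        c->sounding[key] = false;  /* release immediately */
}

/* CC64 semantics: values >= 64 mean pedal down, values < 64 mean pedal up. */
void control_change_64(Channel *c, int value)
{
    c->pedal_down = value >= 64;
    if (!c->pedal_down)
        for (int k = 0; k < MAX_NOTES; ++k)
            if (c->sustained[k]) {
                c->sustained[k] = false;
                c->sounding[k] = false; /* release the held notes now */
            }
}
```

In a real synth "release" would start the envelope's release phase rather than cut the voice, but the bookkeeping is the same.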
The Web Content Accessibility Guidelines (WCAG) 2.0 "is a stable, referenceable technical standard. It has 12 guidelines that are organized under 4 principles: perceivable (Information and user interface components must be presentable to users in ways they can perceive), operable (User interface components and navigation must be operable), understandable (Information and the operation of user interface must be understandable), and robust (Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies). For each guideline, there are testable success criteria, which are at three levels: A, AA, and AAA" (which I will discuss further shortly). Some initial responses to WCAG 2.0 when it first rolled out revealed the difficult and counterproductive nature of the guidelines. For example, Joe Clark (2006), a web accessibility expert and writer, complained that WCAG 2.0 is unreasonable and that "the fundamentals of WCAG 2 are nearly impossible for a working standards-compliant developer to understand [...] you as a working standards-compliant developer are going to find it next to impossible to implement WCAG 2." Conway (2016), writing 10 years later, expressed that "the guidelines themselves are not always specific." Fortunately, the "Techniques for WCAG 2.0" document is updated every two years to offer current practices and some help with meeting the guidelines, in addition to the "How to Meet WCAG 2.0" and "Understanding WCAG 2.0" documents. Because the guidelines can be so vague, there is a lot of room for individual interpretation, so it is highly recommended for web content developers to refer to the "Techniques" document as a starting place. As Bradley (2015) mentions, "[y]ou can find the techniques in the same documents as the success criteria. They're included as a collection of links to yet more pages with yet more details."
However, it is important to know that none of these techniques are required. As long as the success criteria are met, it does not much matter what techniques and methods were used to meet them (Bradley 2015). Besides understanding the guidelines themselves and how to achieve them, developers run into difficulty meeting the WCAG criteria due to limited resources. Both meeting criteria, especially those of Conformance Level AAA, and testing accessibility according to WCAG 2.0 can be very difficult and intensive in time, cost, and labor. For example, Success Criterion 1.2.6 requires that "[s]ign language interpretation is provided for all prerecorded audio content in synchronized media." This is a Level AAA criterion and may not be realistically achievable by web content developers without large budgets or access to resources, such as a sign language interpreter. Further, as Conway (2016) explains, "[t]esting against WCAG guidelines can be challenging. Websites must be tested using a variety of different operating systems, versions and types of browsers." If, say, a small web developer does not have access to various operating systems and devices, it will be impossible to ensure that his/her website is universally WCAG 2.0 compliant. One way to approach this challenge is to start with a conformance level that is practically achievable. According to Bradley (2015), "The spec notes that some content can't meet all the criteria for AAA compliance and doesn't recommend level AAA for all sites. Level AA is really the sweet spot . . . Before you spend too much time wading through all the details of a particular success criteria, you'll want to decide which level of conformance you're aiming for and then focus on the criteria and techniques that are the same level or lower." The conformance levels are cumulative, which means that conformance at a higher level indicates conformance at lower levels, i.e.
conformance to Level AA necessarily implies conformance to Level A (General Services Administration n.d.). As outlined by the U.S. General Services Administration, the WCAG 2.0 success criteria are categorized according to three levels providing successively greater degrees of accessibility. Conformance Level AA does, indeed, seem to be the "sweet spot." It has been "proposed as the new standard for the anticipated refresh of the Access Board Standard for Section 508. The WCAG document does not recommend that Level AAA conformance be required as a general policy because it is not possible to satisfy all Level AAA success criteria for some content" (General Services Administration n.d.).

Avila, Jonathan. "Accessibility Conformance Levels: Reducing Risk." Level Access, August 14, 2017, https://www.levelaccess.com/accessibility-conformance-levels-reducing-risk/.
Bradley, Steven. "WCAG 2.0 — Criteria And Techniques For Successful Accessibility Conformance." Vanseo Design, June 29, 2015, http://vanseodesign.com/web-design/wcag-success-criteria-techniques/.
Clark, Joe. "To Hell with WCAG 2." A List Apart, May 23, 2006, https://alistapart.com/article/tohellwithwcag2.
Conway, Ash. "The Challenges of WCAG 2.0 Compliance." Bugwolf.com, November 8, 2016, https://bugwolf.com/blog/the-challenges-of-wcag-2-0-compliance.
General Services Administration. United States. "WCAG 2.0 Conformance." section508.gov, n.d., https://www.section508.gov/content/build/website-accessibility-improvement/WCAG-conformance.
Discussion in 'NOD32 version 2 Forum' started by Darrin, Mar 16, 2006.

Anyone here just use Windows Firewall? If not, why?

You mean just the firewall and no other anti software? You would not be protected! Even MS realise that and provide free software like Defender. The firewall will not protect you from viruses and trojans, as it has no signature base and does not scan downloads - it just blocks ports.

My fault, I didn't clarify. I meant still using NOD and also antispyware, but for a firewall just using the Windows firewall in XP. Is it just as good at blocking inbound attacks?

I use the standard Windows Firewall and I understand it is good enough for inbound protection. I'm behind a router, and that combined with the Windows Firewall does the trick for me... inbound at least! Besides this I use NOD32 and some anti-spyware programs.

I'm using just NOD32 and the Windows XP SP2 firewall. Works great with minimum overhead on both ends.

I think the key is to be behind a router with NAT. If you are in that situation, then the Windows firewall with NOD32 and some other freebies will serve you well.

I agree. I've been using NOD32 in conjunction with the Windows Firewall for quite a while now and I couldn't be happier. One thing I'd suggest though is to have something like TCPView handy to monitor your connections. I do this just to keep an eye on things.

I always have networks I set up behind routers that do NAT; I don't like software firewalls. However... on these networks, I do leave the XP firewall enabled, with file 'n print sharing set as an exception, as well as needed services like remote desktop if needed. I feel that helps protect the workstations from possible infestations that can spread across networks - say if one computer on the network gets infected, it won't spread as easily.

Another good little thing is Firelog XP, for logging all traffic through the Win XP standard firewall. You'll be amazed what's going on!
XP Pro firewalled + NOD32. I am behind a NAT router as well, with SPI. I used to use ZA before, but ditched it after getting the NAT router as nothing showed up in ZA's log at all.

This leads to the obvious question: do I need the new (sometime) V3 suite? From my understanding my Verizon modem is a router also. Would that be sufficient along with the XP SP2 firewall?

If your PC has a private IP address (192.168.xxx.xxx)... then yes, it's probably a Westell unit you have from Verizon. If you have a stand-alone PC and are behind a NAT router... then really the WinXP firewall is not needed at all, as nothing is going to get past the NAT of a router from the outside... period, unless you intentionally DMZ your computer or open/forward a whole bunch of ports on the router. So if stand-alone behind a NAT router... you don't need a second firewall to check incoming, as there's nothing that will get past the NAT of the router. If you have a bunch of PCs on the network... then I leave it on to protect from other PCs in case they get infected. That's really the only advantage. However, if you just have a bridge/modem from your ISP and your PC is obtaining a public IP from them... then yes, by all means, by gosh get some sort of firewall up, after you clean your PC. It only takes a few seconds of having a public IP address to get infected.

I have a Westell Model 327W. I'll keep Windows Firewall enabled. I know what you mean about it only taking a few seconds to get infected. With my previous ISP years ago I had a basic DSL modem, before SP2. I did a fresh install of Windows but left my internet connection on and all plugged in. Within a few seconds, before I had a chance to enable a firewall, I was hit.
Greedy algorithm for dividing a region of numbers into partitions with the most equal sum?

Say we have a number sequence, and we have X sections to divide this into:

1, 4, 7, 9, 3, 11, 10

If we had 3 sections, the optimal answer would be [1, 4, 7][9, 3][11, 10] or [1, 4, 7, 9][3, 11][10], since the largest section sum is 21. This is the best case (I think - I did it by hand). We want each section to be as equal as possible. How can this be done?

My first attempt at an algorithm was to find the highest X values (9, 11, 10) and base the regions off of that. That does not work in an example like the one below, since one of the regions will NOT contain one of the highest values in the set:

3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1

Again with X=3 sections, the optimal answer is [3, 2][2, 1, 1, 1][1, 1, 1, 1, 1]. Of course I could brute force every possible combination, but there should be a much more efficient way of doing this. I'm having trouble finding it.

This sounds like the Linear Partition problem. Here's a solution.

This is exactly the linear partition problem, as @rta3tayt32yaa32y says. If you really mean a greedy algorithm as your subject says, which will be an approximation, then just divide the sum of the elements by X. Call this D. Maintain an accumulated sum as you work from beginning to end of the list. Every time the sum reaches D or more, make the previous elements a section, delete them, and restart the sum. If the sequence is 1, 4, 7, 9, 3, 11, 10, the sum is 45. For X=3, we have D=15. So the first section is [1, 4, 7], because adding 9 would make the sum 21. The next is [9, 3], because adding 11 would make the sum 23. This leaves [11, 10] for the last. To find the exact answer, you need a dynamic program. This has already been extensively discussed here, so I won't repeat that.

The question is rather confused, but the answer of @Óscar López is very good. I would try to first start by grouping all the numbers into subgroups of equal size rather than sum.
Now you can go through and poll all these subgroups' sums, and calculate the average subgroup sum. You should now iterate through each subgroup and perform the following manipulations to the subgroup:

- If the subgroup's sum is LESS than the average AND the sum of the subgroup to its right is GREATER than the average, check if one element from the right subgroup can be moved to the left subgroup (I'll elaborate on 'checking' later).
- If the subgroup's sum is LESS than the average and the sum of the subgroup to its right is LESS than the average, do nothing and move on to the next subgroup.
- If the subgroup's sum is GREATER than the average and the sum of the subgroup to its right is LESS than the average, check if one element from the subgroup can be moved to the subgroup on its right.
- If the subgroup's sum is GREATER than the average and the sum of the subgroup to its right is also GREATER than the average, do nothing and move on to the next subgroup.

What do I mean by 'check' if you can move an element? An element should only be moved if the movement brings the destination subgroup closer to the average by more than it pushes the source subgroup away from it. So if moving an element from the left to the right subgroup brings the right subgroup 3 closer to the average, it should NOT result in the left subgroup becoming further from the average by 3 or more. Also, obviously, an element should only be moved as long as there is still at least one element left in its source subgroup.

We're not done though. When you've gone through all the subgroups and made any element movements, you should then go back through, once again get the sums of all the subgroups, and find the new average. Now you start over, going through the subgroups and making movements which bring them closer to this new average. Once you are able to go through all the subgroups and find that no movements could be made, you are done.

NOTE: This might not work - I haven't tested it, it's just an idea I have. Hope it helps.
The solution for this problem is not greedy, as you stated in your question title. It is in fact a dynamic programming one. In particular, it is the classic partition problem, related to the better known knapsack problem, which is often mistakenly assumed to have a greedy solution (it doesn't). The classic knapsack problem tries to fit in the best objects up to a total capacity S, read as input. In your case, you just need to fit objects that add up to volume T/p to obtain the first partition (where T here is the total of the numbers in your array and p is the number of partitions). This will generate a partition whose sum is as close as possible to T/p. The only solution that comes to mind to generate the rest of the partitions is to eliminate the numbers used so far and re-iterate for p-1 partitions. So, I would use something like this:

while (p >= 2) {
    partition = getSet(knapsack(v, T/p));
    v.removeAll(partition);
    T -= sum(partition); // the remaining total
    p--;                 // one fewer partition left to fill
}

For knapsack implementations, just use Google... there are more than enough existing implementations in all sorts of languages.
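Both the greedy approximation and the exact dynamic program discussed in the answers above can be sketched in Python. The function names are mine; the DP is the standard linear-partition recurrence minimizing the largest contiguous section sum:

```python
def greedy_partition(nums, x):
    """Approximate: start a new section whenever adding the next number
    would push the running sum past total/x (the D threshold above)."""
    d = sum(nums) / x
    sections, current = [], []
    for v in nums:
        if current and sum(current) + v > d and len(sections) < x - 1:
            sections.append(current)
            current = []
        current.append(v)
    sections.append(current)
    return sections

def linear_partition(nums, x):
    """Exact: cost[i][k] = smallest possible maximum section sum when
    splitting the first i numbers into k contiguous sections."""
    n = len(nums)
    prefix = [0] * (n + 1)
    for i, v in enumerate(nums):
        prefix[i + 1] = prefix[i] + v
    INF = float("inf")
    cost = [[INF] * (x + 1) for _ in range(n + 1)]
    cut = [[0] * (x + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(1, n + 1):
        for k in range(1, min(i, x) + 1):
            for j in range(k - 1, i):  # j = where the last section starts
                c = max(cost[j][k - 1], prefix[i] - prefix[j])
                if c < cost[i][k]:
                    cost[i][k], cut[i][k] = c, j
    # walk the stored cut points backwards to recover the sections
    sections, i, k = [], n, x
    while k > 0:
        j = cut[i][k]
        sections.append(nums[j:i])
        i, k = j, k - 1
    return sections[::-1]
```

On the question's first example, `greedy_partition([1, 4, 7, 9, 3, 11, 10], 3)` reproduces the [1, 4, 7][9, 3][11, 10] split, and the DP confirms 21 is the best achievable maximum; on the second example the DP finds the all-sums-equal-5 split.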
Technical Community Event - 26 Feb 2009

Today the Academy York office hosted an event for the Academy's technical community. Staff with ICT interests from York were joined by a range of like-minded colleagues from the Academy Subject Centres and some guest speakers from outside of the community - around thirty in total. The day began with a brief summary of points from each area. This is a regular starter for these bi-annual gatherings, and there were over a dozen updates on various web site updates, new technology implementations and practices. This was then followed by a presentation from one of the guest presenters from outside of the Academy technical community - Dominic Watts from Microsoft. Dominic is the Business Manager for Higher Education, and he gave an overview of SharePoint and its features, more specifically Microsoft Office SharePoint Server (MOSS) and its implementation within Microsoft itself. Various customised implementations for other institutions (including a number of Higher Education ones) were also shown, though I think the real insight came from the Microsoft in-house implementation as that gave a view from the side of a collaborative user rather than a relatively passive web browser. This led nicely into a demonstration of Microsoft Office Communications Server (MOCS) by James Cummings - a member of the York ICT team. With assistance from Paul Hayward (another member of the York ICT team), an overview of the communications benefits of the system was presented live using the ICT team's (relatively) new virtualised development environment. Dominic was also able to support this presentation and identified that multiple sites running Microsoft Exchange could support a federated set-up, allowing presence identifiers (i.e. do not disturb, busy, available status) to be used outside of an organisation.
This was followed by Martin Poulter of Economics presenting on some of the tools and lessons learned from his more recent forays into the use of RSS news feeds. There were a number of tools such as Yahoo Pipes and xFruits that were demonstrated to show how feeds could be filtered and aggregated and the underlying message was that we could make the feeds work for the Academy (both for the public & media as well as internally) and that agreeing on a standard (RSS 2.0) including publishing date would be a logical step in bringing things in line. A quick update from Mike Clarke (York) on an internal Academy collaboration project was followed by a chance for interpersonal networking over a spot of lunch. The afternoon sessions kicked off with a review from Sarah Heaton (York) on the project relating to EvidenceNet (formerly the Research Observatory). The focus here was very much on bringing everyone up to speed on the phased nature of the work and setting up the type of support required from the Subject Centres and how they could be engaged for the second phase in the future. The second post lunch presentation was given by staff from Jorum. The presentation was opened by Nicola Siminson (Community Enhancement Officer) and was generally an overview of the service. Funded by JISC it is a free online service for staff in UK universities and colleges that allows them to share e-learning/teaching resources. There was a significant amount of time given to explaining how Jorum is adopting a three tier access system. - Jorum Open will give clear access to resources covered by a Creative Commons licence. - Jorum Education UK will ensure that resources marked for this level are only available to UK FE & HE institutions as access is provided through a system of federated access management. - Jorum Plus is the most restricted access level and is reserved for resources that are specially licensed (e.g. JISC collections). 
For this level of access additional institutional authorisation is required. Matt Ramirez (Training & Support Officer) then picked up the presentation and talked through some of the latest work carried out on Jorum, specifically the Jorum Search Tool. The big change seemed to be in the speed of the system, as the open source search server Apache Solr is now being used as a cache. Further developments are in the pipeline, so everyone will probably be keeping an eye on this. A brief coffee break later, Martin Poulter was back to the PowerPoint, this time giving an insight into his recent experiences with Wikipedia. After covering off several acronyms, Martin showed how several Wikipedia pages were driving traffic to the Economics Subject Centre web site. The odd thing was that he achieved this without setting up the pages himself. Rather, he made specific and open contributions as someone with a vested interest, at the same time building up his Wikipedia user profile as a considerate updater who was contributing to the content of Wikipedia in a variety of ways - particularly in managing Wikipedia vandalism. Wikipedia certainly seems like an area where knowing the territory and community is the key to your own success outside of it. It may well be something I'll explore more deeply at a later date. John McNaught of the National Centre for Text Mining gave an interesting presentation on … text mining. By using a range of tools (/services) based on natural language processing and significant computing power, text mining analyses large quantities of textual information and attempts to find key words, phrases and even context, to allow a user to focus searches and even potentially develop new hypotheses.
There could well be areas in which to exploit some of the search tools to narrow and contextualise subject/discipline searches, and even to help automatically identify appropriate key words as the basis for meta tagging for repositories like Jorum and EvidenceNet. The final presentation came from Paul Harding of York's Online Services Team. Paul gave a brief update on the current scale of the Academy's public-facing web site and a view of some of the upcoming sub-sites currently in development that would be appearing in the coming weeks and months. There were also a few areas where new functionality had been introduced, mainly around RSS feeds (both incoming and outgoing), but also some integration with micro-blogging and social networking sites. Thematic searching was also coming to the fore, and a rating system is progressing such that it should be deployed in the not too distant future. Overall this seemed to be the best attended and most agenda-packed Academy technical gathering I've been to, and I think everyone came away with plenty to think about.
Progress keeps printing after .Stop()

Simple demonstration:

func ExampleStoppingPrintout() {
	progress := uiprogress.New()
	progress.RefreshInterval = time.Millisecond * 10
	progress.Start()
	bar := progress.AddBar(1)
	bar.Incr()
	time.Sleep(time.Millisecond * 15)
	// workaround
	// progress.Bars = nil
	progress.Stop()
	time.Sleep(1 * time.Second)
	// Output: [====================================================================]
}

Expected: the bar printed once, as the progress is stopped after one interval. Actual: the bar is printed every 10 milliseconds.

The cause of this issue is that .Listen() doesn't make a copy of Progress.closeChan, and misses the moment when it is replaced by nil in .Stop(). A simple, albeit imperfect, workaround is to remove the bars from the progress:

progress.Bars = nil

It is still not fully working, as there will be one more printout when .Listen() wakes after time.Sleep(p.RefreshInterval).

I have the same issue. The simplest solution would be to make a copy of the stop channel and use time.Ticker instead of time.Sleep:

func (p *Progress) listen(stopChan chan struct{}) {
	p.lw.Out = p.Out
	ticker := time.NewTicker(p.RefreshInterval)
	for {
		select {
		case <-stopChan:
			ticker.Stop()
			return
		case <-ticker.C:
			p.mtx.RLock()
			for _, bar := range p.Bars {
				fmt.Fprintln(p.lw, bar.String())
			}
			p.lw.Flush()
			p.mtx.RUnlock()
		}
	}
}

A draft can be found at my branch, where I try to fight racing conditions of Start/Stop to make Travis happy with my new tests. An even better solution IMHO would be to avoid swapping Progress.closeChan and instead make it non-restartable. But I am not sure how backwards-breaking such a change would be for clients of the library. @mgurov, are you writing a pull request for this?
I have tried your fix and here are my findings:

1. works most of the time with multiple (2) progress bars, but sometimes the progress bars are printed more than once (don't know if this is related, or an entirely new issue), and the output just after the Stop might not appear when this happens
2. almost always works with a single progress bar, but sometimes the output doesn't appear (as in: is cleared)
3. if I add a long delay, say time.Sleep(300 * time.Millisecond), after the stop and before printing to stdout, it always works.

@henvic thank you for the feedback. and 3. might be related to #18 - the bar doesn't print out properly. That hints that my fix might need to be extended correspondingly, for example doing that flush on stop. Not clear to me. I guess with the current master the behavior is not much better? I probably won't be working very actively on a PR before the weekend. Feel free to jump in.

Regarding 2: with the current master, the behavior is worse (flushing before stop should fix it). I don't know if I am going to be able to work on a PR, but I'll let you know if I have any news.

This fixed it for me! Thanks 👍

progress.Bars = nil

@revett, it should solve 99.9% of the cases, though you will still hit the issue sometimes.
Cannot verify lock on path; no matching lock-token available

From: Tom Jones
Sent: Tuesday, April 13, 2010
Subject: cannot break lock due to no matching lock-token

A file was created and locked. When I committed I did not retain the locks. The file was unlocked and the project tagged; the tagged folder was merged with an empty trunk and then the trunk checked out. Now I want to remove the now-empty directories and I can't, because a lock is still retained. The file no longer exists in the trunk, but any changes to the trunk cannot commit because the message "cannot verify lock on path ...; no matching lock-token available. If you wish to break the lock, use the 'Check for Modification' dialog" appears and prevents any actions on the trunk (i.e. delete all files). Checking for modifications does not help, because the working folder associated with this lock no longer exists either; none are listed (the file was deleted). I cannot merge, delete, copy or anything. I am using TortoiseSVN, which directs me to this mailing list for discussion. The URL for the repository is svn://servername/reponame. How do I remove this lock-token, or whatever I need to do?

Tom Jones, Woodward Governor Co., Turbine Systems (Test Engineering)

Here was the session (limited to the file in question):

R:\svn\TestEng>svnadmin lslocks r:\svn\TestEng
Path: /Jet Pipe Servo Stands/trunk/Testsys/SOFTWARE/UTILS/Running_Average_(Shift).vi
UUID Token: opaquelocktoken:f97f8e59-c1d0-ad4f-afa7-18f1c75c8e75
Owner: tjones
Created: 2009-06-15 09:14:22 -0500 (Mon, 15 Jun 2009)
Expires:
Comment (1 line):

Still no luck:

R:\svn\TestEng>svnadmin rmlocks r:\svn\TestEng "/Jet Pipe Servo Stands/trunk/Test/sys/SOFTWARE/UTILS/Running_Average_(Shift).vi"
Path '/Jet Pipe Servo Stands/trunk/Test/sys/SOFTWARE/UTILS/Running_Average_(Shift).vi' isn't locked.

Bob Archer (answered Apr 14, 2010): Try asking on the relevant mailing list for your client. If you were using the command-line client, you'd use "svn unlock --force".

Tom Jones: Pardon my amateurishness, could I use a local command-line client (while keeping Tortoise resident) to execute this? I tried issuing commands using the standard DOS command prompt window. File permissions are questionable, as I did not set up this server and those who did really didn't know Subversion. I tried the "get a lock" option and got the same message for both commands.

Other suggestions from the thread and related reports:

- For me the issue was the same ("not working lock"): right click, Get lock, and check "Steal lock".
- "Without this some special characters will go with lock files, and during commit your locks will not match." -- Johan
- Use svnadmin on the server to dump the repository, load it on a different machine in a temp repository, and try the same operation with the temp repository. It will execute as it should on your PCs. I'm betting the issue isn't on the client side as the two links you provided suggest, but rather on the server side.
- If you are in Eclipse (e.g. Subversive 2.2.1), go to Team and click Lock.

A related report (houdroge, 2013) shows the local and remote lock tokens matching, yet the error still occurring:

Local file:
Lock Token: opaquelocktoken:2a0cb251-8e5c-4b2e-9eb2-282abb2d67e6
Lock Owner: houdroge
Lock Created: 2013-05-23 15:14:37 +0200 (jeu., 23 mai 2013)

Remote file:
Lock Token: opaquelocktoken:2a0cb251-8e5c-4b2e-9eb2-282abb2d67e6
Lock Owner: houdroge
Lock Created: 2013-05-23 15:14:37

Could anyone provide an explanation / solution to this problem?

Another report reproduces the problem as follows:

1. new file
2. add file
3. commit file
4. lock file
5. modify file
6. commit file

and that last commit fails with the following error:

Cannot verify lock on path '...'; no matching lock-token available.

We think there might be an incompatibility somewhere (see http://subversion.1072662.n5.nabble.com/Subversion-Llocked-file-could-not-able-to-commit-the-changes-td137837.html). Seems that thread already explains what the problem is, and how you can fix it.
–jgifford25 Dec 6 '10 at 14:42 First The lock that was there is not needed. All rights reserved. I am using the following versions : TortoiseSVN 1.7.12, Build 24070 - 64 Bit Subversion 1.7.9 Concerning the second question, the problem persists after an update of the root of working See the Red Bean book chapter on locks. Is there any way I can remove whatever causes the problem so I can commit.
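A recurring stumbling block in the exchange above is that `svnadmin rmlocks` needs the path exactly as the repository records it (leading slash, spaces and all, quoted); anything else yields "Path '...' isn't locked." A small helper of my own (not from the thread) that pulls those exact paths out of `svnadmin lslocks` output:

```python
def locked_paths(lslocks_output: str) -> list:
    """Return the exact repository paths named in `svnadmin lslocks` output.

    These are the strings that must be passed (quoted) to
    `svnadmin rmlocks`; any deviation gets "Path '...' isn't locked."
    """
    prefix = "Path: "
    return [line[len(prefix):].rstrip()
            for line in lslocks_output.splitlines()
            if line.startswith(prefix)]


# Sample output taken from the session quoted above.
sample = """Path: /Jet Pipe Servo Stands/trunk/Testsys/SOFTWARE/UTILS/Running_Average_(Shift).vi
UUID Token: opaquelocktoken:f97f8e59-c1d0-ad4f-afa7-18f1c75c8e75
Owner: tjones
Created: 2009-06-15 09:14:22 -0500 (Mon, 15 Jun 2009)"""

for path in locked_paths(sample):
    # Quote the path so the spaces survive the shell.
    print('svnadmin rmlocks r:\\svn\\TestEng "%s"' % path)
```

Note that the path printed here comes from `lslocks` itself, so it cannot drift from what the repository has on record, which is where the manual attempts above went wrong.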
/// <amd-dependency path="angular">
/// <amd-dependency path="angular-sanitize">
import angular = require("angular");
import Directive = require("./Directive");
import $ = require("jquery");
import QPromise = require("../shared/QPromise");

var parse;

// Synchronizes assuming the function is idempotent and the output is a QPromise,
// and then sets the value of the QPromise in the parent as variableName.
class createQPromise extends Directive {
    public arguments: any[] = <any>"=";
    public function: Function = <any>"=";
    public variableName: string = <any>"=";

    public construct(element: angular.IAugmentedJQuery) {
        var unsub = () => { };
        this.$watch(() => [this.arguments, this.function, this.variableName], () => {
            if (!this.arguments) return;
            if (!this.function) return;
            if (!this.variableName) return;
            var getter = parse(this.variableName);
            var qPromise: QPromise<any> = this.function.apply(this.$parent, this.arguments);
            unsub();
            unsub = qPromise.subscribe(newValue => {
                getter.assign(this.$parent, newValue);
                this.$parent["safeApply"]();
            });
        }, true);
        this.$on("$destroy", () => { unsub(); });
    }
}

var mod = angular.module("createQPromise", []);
mod.directive("createQPromise", function ($parse) {
    parse = $parse;
    return <any>(new createQPromise().createScope());
});
NetBeans IDE 8, GlassFish 4: GlassFish Server administrator port is occupied by null

I just downloaded NetBeans IDE 7 with GlassFish 4. I made a project to test it out and see how it goes, and I got this error right from the start:

Could not start GlassFish Server: DAS port is occupied while server is not running
[location]: Deployment error: Could not start GlassFish Server: DAS port is occupied while server is not running
See the server log for details.
BUILD FAILED (total time: 1 second)

I have reinstalled it three times, with GlassFish and without (adding it to NetBeans later). I changed the domain.xml name="admin-listener" port="4848" to something different. I ran this cmd command:

netstat -aon | find ":80" | find "LISTENING"

and closed the program. I ran as administrator. I think I have tried almost everything, but it simply won't run and keeps returning the same error. Usually I would have given up, but this software is required for a school project, so I will try everything. I hope someone can help me. Thanks in advance.

Have you installed Oracle 11G or something similar? You have to find the program which uses port 4848.

Nope, I have not installed anything like that. I also shut down all the Java programs; I've got only Visual Studio.

Check this, it is what I did to solve my problem. You have to find the process that has taken the port you need. You can try finding it by running the terminal with the command:

netstat -aon | find ":80" | find "LISTENING"

Find the PID you need and then kill that process in Task Manager. I hope you find this useful. Thanks.

A few points:
- Why not download NetBeans 8, which also includes GlassFish 4?
- Assuming that you have successfully figured out that no other process is listening on port 4848: which version of the JDK are you using? Can you try JDK 7 if you are using JDK 8?
- Looks like you are not alone - see NetBeans bug 237477. Note that this isn't the only problem.
I run on a Mac and can use the asadmin start command successfully on the remote server. If I try to start it from NetBeans, it gives me this message. One hint might be that the domain.xml file is set so that the listening port is 9090, while the properties screen for the remote server, which I entered 9090 for, tells me the HTTP port is 23043. I can't edit it; every time I try to create that remote server it sets it to this value. The server will run fine if I start it by hand on the remote server, but NetBeans doesn't think it is running. This occurs because I had to select domain2, since NetBeans says domain1 is already registered on my local machine. I wanted to have a local domain1 and a remote domain1 that are identical, so I can test locally and then deploy remotely.

In my experience with Windows 8.1 + NetBeans 8.0 + GlassFish 4.0, the problem resides in Windows folder permissions that block the server execution. I solved the problem by changing the permissions of the glassfish/domain/domain1 folder, giving the user full control. If this does not solve your problem, try launching the server from the console:

asadmin start-domain --verbose

and read the exceptions to try to solve the problem. Edit - reading other posts that may help: see "Glassfish server started failed in netbeans 6.9". Or check your firewall: allow >> C:\Program Files\glassfish-X.X\glassfish\modules\glassfish.jar

In my case, when using the command netstat -aon | find ":4848" | find "LISTENING" I noticed that one process was occupying this port. When I checked what it was, I saw it was the VMware NAT controller, because I had previously configured a network adapter to listen on this port. I just stopped all VMware-related services (in my case I didn't need them for development purposes), and that solved the problem.

Go to Task Manager -> Services -> stop the processes whose PID is 3136, 2268, 2468, 23..., and all processes with a PID near the web server's PID. It works for me on Windows 8.1 Pro and Windows 7.
You may want to not use caps lock when typing.

I had the same error message. It turned out to be caused by my firewall blocking port 4848.

It may be late, but I solved this issue by deleting the app server from NetBeans and adding it again. In my case it was NetBeans 8.2 and Payara 4.1 instead of GlassFish.

If you changed the host of the GlassFish server, set it back to localhost and it should work.
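Several of the answers above boil down to "find out whether port 4848 is actually occupied". As a cross-platform alternative to the netstat incantations, here is a small sketch of my own (not from the thread) that probes the port directly:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to connect; a successful connect means something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# 4848 is GlassFish's default DAS (admin) port.
if port_in_use(4848):
    print("Port 4848 is occupied -- find and stop that process first.")
else:
    print("Port 4848 is free -- the DAS should be able to bind it.")
```

This only tells you whether the port is taken, not by whom; identifying the owning process still needs netstat (or Task Manager), as described above.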
M: The C Conference - zdw http://cconf.github.com/ R: memset One technology development which makes this increasingly relevant is the prevalence of Arduino (Raspberry Pi, etc.) - embedded platforms that take care of a lot of the hardware heavy lifting for you. But there is a gap between what the Arduino libraries let you do and the capabilities of the hardware itself. And it is for these things - projects of increasing sophistication that require some C and assembly - that there is value in wider dissemination of expertise. Incidentally, I (a few months ago) created an "NYC C and Assembly Enthusiasts" meetup group, which seems to be in a similar vein. I have not yet hosted any meetups, but since C Conference Enthusiasts seem to be on this thread, please send me a note! R: saurik You should not use the signature design of someone else's book if you have no affiliation with them: come up with your own branding. R: tptacek Meh. I thought it was clever. K&R is a little like the bible; nobody's going to think this guy was involved with it. R: saurik _sigh_ I did... 
:( Now, thanks to the over-the-top "nobody's going to think that" comment, we get to have an argument over whether I'm stupid instead of looking at similar situations (such as the many people who do that to my brand, and the real confusion I can demonstrate among both my user base and the people I network with, many of whom are not so technical as you, I, or even my users), the problems with accepting that specific excuse at face value (as I have found it equally used by people running real money-related scams in addition to people with good intentions), the potential for damage to the brand for the people we admire who built it (Kernighan is still alive, and it seems pretty evident that he is paying homage to this with the cover of his new book, D is for Digital), or any of the interesting moral questions behind the sentiment (for a humorous example of where this can go, see the episode "I'm Not That Guy" of How I Met Your Mother). R: tptacek Uh, what? I don't think you're stupid. R: saurik I am not certain how to reconcile this comment of yours with your earlier flat dismissal. R: derleth 'Stupid' and 'ignorant' or 'naïve' are not the same thing. R: tptacek Uh... what? I don't think he's ignorant or naive. Jiminy. You can disagree with an idea casually around here, right? R: angersock no we are very serious business R: silentbicycle Embedded development is another important niche for C. (There are already many embedded conferences, of course.) A track on testing C projects would be good. For example, check out Unity (<http://throwtheswitch.org/white-papers/unity-intro.html>), CMock (<http://throwtheswitch.org/white-papers/cmock-intro.html>), and greatest (<https://github.com/silentbicycle/greatest>). (disclosure: I'm involved with all three.) R: alain94040 Pick a date and a location. Contact the venue and make a tentative booking. Then, only then, spread the word publicly. Right now, no one can help because no date or format is set. 
Here's a tip: use cvent.com to post an RFP (request for proposal). It's a great way to contact tens of venues/hotels with only a few clicks, and never have to talk to anyone on the phone to get pricing for various locations, rooms and sizes. A geek's paradise :-) Or you can do like I did for the Startup Conference: start small (the first conference was officially half a day but I crammed 6 hours of content, crazy). See that people loved it, and grow from there. You'll learn as you go. R: mvzink I think your suggestions are really useful, but on the other hand, I think for something as broad as C, an initial test with a measuring stick is required before figuring out the specifics. I mean, with C, you could go in such diverse directions as microcontroller programming all the way to desktop application development. It may be better to figure out how to reconcile C's broad applicability before booking a venue. R: cperciva So far this seems more like "the vague idea that a C conference might occur somewhere at some point if someone wants to pay for it and someone wants to organize it" than "The C Conference". Can we have more action and less wishful thinking? R: philips Woah, this hit Hacker News and Twitter before I expected. I started contacting people today to start organizing this and created the site to have a point of discussion. If you want to discuss ideas, sponsorship or venue stuff email me directly or post to the Google Group. If you want to discuss in person stop by 231 27th St. SF, CA. R: cperciva Discussion is good, but it needs to start somewhere. All you've got -- or at least, all you've published -- is a vague list of ideas, most of which would _individually_ make a big conference. A conference is defined as much by what it _isn't_ as by what it _is_. Start by narrowing your scope a bit; a conference which is about everything relating to the single most widely used language in the history of computer programming isn't even remotely feasible. 
R: jfarmer Chill, man. It's clear he posted this as a "Here's a good idea, let's start to flesh it out" sort of page, and it got exposure before it had even been fleshed out. I'm sure he's looking for people who are really excited about making it happen to help him do just that. I think this is a fine way to go about it. Two weeks from now nobody will remember that this took off on GitHub, HN, Twitter, etc. If you like the idea, or even just the potential of the idea, why don't you offer up useful suggestions instead of being uselessly critical? What would you want to see? Since it needs more focus, where would you focus it? If you were giving a talk, what would you give? Do you know anyone who would be excited about helping to organize it? If I were Brandon, I'd be ecstatic that this took root so quickly and people are already responding emotionally -- even if negatively. R: cperciva _why don't you offer up useful suggestions instead of being uselessly critical?_ I thought "you need to narrow your scope" was a useful suggestion. I guess you think differently. R: zedshaw No, a suggestion would be: "You need to narrow your scope. I personally would be more interested in X and Y, but not Z, J, or N." What you did was give a vague criticism without offering a concrete solution in response. Those kinds of criticisms are always difficult to respond to primarily because, should he follow your suggestion and it fail, you can simply say: "Well I just said narrow your scope, I never said narrow it to those failed topics." R: cperciva I'm not so egotistical as to think that the topics which interest me would be the ones which interest lots of other people. In fact, I have ample evidence to the contrary. R: jfarmer Nobody is saying that's the case, but surely you see the (qualitative) difference between: "Why did you even do this? It's really unfocused. This needs to be more focused if you ever hope in seeing something happen. 
We need to see more action and less wishful thinking." and "Interesting! This has a lot of potential. I'd love to see topics X, Y, and Z, although that's just me. Send me an email and I can introduce you to some people who might be interested in helping to organize this." R: hsmyers Didn't know about CCAN <http://ccodearchive.net/> I plan on investigating. Am home bound so attending the conference is not possible as much as I would like to. Am hoping for videos and similar post conference so I can get what I can :) R: dhconnelly I would absolutely pay for and go to this. Such amazing potential for keynote speakers. R: BrianLy Do people really go to programming conferences for keynotes? My experience has been that the hallway conversations are alone worth more than keynotes. Many keynotes sessions tend to be disappointing because there is not enough time to dive into technical details, or the presenter is not sure if they should be focusing on the movement as opposed to their technical contributions. R: cperciva I go to conferences I've attended before because of the hallway track. But that won't get me to attend a _new_ conference, because I don't know if the hallway track will be any good yet. R: pjmlp It might turn out to be interesting. So far I was only aware of such conferences by ACCU. R: pavlov Every time I use Grand Central Dispatch on Mac OS X, I get the feeling that "C99 + Blocks" is actually a pretty great language for many things, and I'd love to use it on other platforms. One of these days, I need to look into the state of Clang on Windows... If you haven't used blocks, Wikipedia has a concise article with a readable code sample, although it doesn't really give a very good idea of what real closures in C are good for: <http://en.wikipedia.org/wiki/Blocks_(C_language_extension)> R: leon_ uh, looks like some webdev hippster read the 2nd ed. and now wants to start a C-rockstar/ninja cult. R: yxhuvud Would that be a bad thing?
Pass parameter values to table?

I have the following stored procedure query, and now I need to pass some parameter values for the 'details' table columns, like "cus_name" and "cus_tel", inside this query in SQL Server.

SELECT APO_Order_Id AS Purchase_Order_Number,
       p.ref_code AS Project_ID,
       p.PJM_UserDefinedProjID AS Project_Code,
       CASE m.APO_Use_Alt_Address
           WHEN 0 THEN LTRIM(isnull(p.PJM_LotNumber, '') + ' ' + isnull(p.PJM_StreetNumber, '') + ' ' + isnull(p.PJM_SiteAddr1, '') + ' ' + isnull(p.PJM_Suburb, '') + ' ' + isnull(p.PJM_PostCode, ''))
           WHEN 1 THEN LTRIM(isnull(p.PJM_Alt_LotNumber, '') + ' ' + isnull(p.PJM_Alt_StreetNumber, '') + ' ' + isnull(p.PJM_Alt_SiteAddr1, '') + ' ' + isnull(p.PJM_Alt_Suburb, '') + ' ' + isnull(p.PJM_Alt_PostCode, ''))
       END AS Project_Site_Address,
       p.PJM_StartDate AS Site_StartDate,
       s.CNT_ClientName AS Creditor,
       m.APO_Description AS Purchase_Order_Description,
       isnull(d.Order_Amount, 0) AS Order_Total,
       ui.tui_username AS Project_Manager
FROM Account_APOrderMaster AS m
LEFT JOIN (
    SELECT APOD_Master_Id, APOD_Project_Id, SUM(APOD_Total_Amount) AS Order_Amount
    FROM Account_APOrderDetail
    GROUP BY APOD_Master_Id, APOD_Project_Id
) AS d ON m.Ref_Code = d.APOD_Master_Id
LEFT JOIN Client_Name AS s ON m.APO_Supplier_ID = s.Ref_Code
LEFT JOIN Project_Master AS p ON d.APOD_Project_Id = p.ref_code
LEFT JOIN System_UserInformation AS ui ON m.APO_ProjectManagerId = ui.Ref_Code
WHERE apo_order_id = @purchase_order_number
GO

How can I do this? 
Completed stored procedure code:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[spGetPurchaseOrderProjectDetails]
    @purchase_order_number AS varchar(20)
AS
SET NOCOUNT ON
SELECT APO_Order_Id AS Purchase_Order_Number,
       p.ref_code AS Project_ID,
       p.PJM_UserDefinedProjID AS Project_Code,
       CASE m.APO_Use_Alt_Address
           WHEN 0 THEN LTRIM(isnull(p.PJM_LotNumber, '') + ' ' + isnull(p.PJM_StreetNumber, '') + ' ' + isnull(p.PJM_SiteAddr1, '') + ' ' + isnull(p.PJM_Suburb, '') + ' ' + isnull(p.PJM_PostCode, ''))
           WHEN 1 THEN LTRIM(isnull(p.PJM_Alt_LotNumber, '') + ' ' + isnull(p.PJM_Alt_StreetNumber, '') + ' ' + isnull(p.PJM_Alt_SiteAddr1, '') + ' ' + isnull(p.PJM_Alt_Suburb, '') + ' ' + isnull(p.PJM_Alt_PostCode, ''))
       END AS Project_Site_Address,
       p.PJM_StartDate AS Site_StartDate,
       s.CNT_ClientName AS Creditor,
       m.APO_Description AS Purchase_Order_Description,
       isnull(d.Order_Amount, 0) AS Order_Total,
       ui.tui_username AS Project_Manager
FROM Account_APOrderMaster AS m
LEFT JOIN (
    SELECT APOD_Master_Id, APOD_Project_Id, SUM(APOD_Total_Amount) AS Order_Amount
    FROM Account_APOrderDetail
    GROUP BY APOD_Master_Id, APOD_Project_Id
) AS d ON m.Ref_Code = d.APOD_Master_Id
LEFT JOIN Client_Name AS s ON m.APO_Supplier_ID = s.Ref_Code
LEFT JOIN Project_Master AS p ON d.APOD_Project_Id = p.ref_code
LEFT JOIN System_UserInformation AS ui ON m.APO_ProjectManagerId = ui.Ref_Code
WHERE apo_order_id = @purchase_order_number
GO

Unable to understand! Do you want the parameters for the WHERE clause, to filter the data?

Yes, of course. I need to insert new values as parameters in the details table.

@purchase_order_number - where does it come from?

That is from the Account_APOrderMaster table.

Is this also a parameter? Then, in the same way, you have to add new params like this to the stored procedure inputs.

Please show the complete stored procedure code.

Actually, the above stored procedure is used to grab some values from the Account_APOrderMaster table, and now I need to pass that data and new parameter data to the 'details' table. Please see the complete code above.

What do you mean by "pass some parameter values to the 'details' table columns like cus_name and cus_tel inside this query"?

I mean I need to insert a new data set into the 'details' table columns "cus_name" and "cus_tel" using the above stored procedure.
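The thread trails off without a concrete answer. The general pattern being asked about - accepting extra parameters and writing them into a details table - can be sketched as follows. The table and column names (details, cus_name, cus_tel) come from the question; everything else is illustrative, and the example uses Python's built-in sqlite3 rather than SQL Server so it is self-contained:

```python
import sqlite3

# A throwaway in-memory stand-in for the real 'details' table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE details (cus_name TEXT, cus_tel TEXT)")

def insert_details(conn, cus_name, cus_tel):
    # The values arrive as parameters (placeholders), the same idea as
    # @purchase_order_number in the stored procedure above -- never
    # string-concatenated into the SQL text.
    conn.execute(
        "INSERT INTO details (cus_name, cus_tel) VALUES (?, ?)",
        (cus_name, cus_tel),
    )
    conn.commit()

insert_details(conn, "Sample Customer", "0123456789")
```

In T-SQL the equivalent move is adding @cus_name and @cus_tel alongside @purchase_order_number in the procedure header and using them in an INSERT; the placeholder principle is the same.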
Development and application of a watershed system model for the Heihe River basin The model integration research team for the Heihe River basin recently published an article titled "Novel hybrid coupling of ecohydrology and socioeconomy at river basin scale: A watershed system model for the Heihe River basin" in the journal Environmental Modelling & Software. Watersheds are the basic unit of Earth's land-surface system and, in most cases, are characterized by complex natural-human interactions. Numerical modeling is one of the fundamental methodologies used to study watershed-scale Earth systems. In order to more accurately portray the reciprocal feedbacks and synergistic evolution mechanisms among ecological, hydrological, and socio-economic subsystems in watersheds, and to support integrated water resources management and sustainable development of watersheds, it is necessary to conduct research on watershed system models and to develop decision support systems for integrated water resources management, with watershed system models as the backbone. The work was conducted under the major research plan of the National Natural Science Foundation of China (NSFC), the integrated research on the ecohydrological process of the Heihe River basin (2010-2019). The model integration research for the Heihe River basin has undergone a transition from the improvement of specific ecological and hydrological processes to the comprehensive development of a new basin system model that can reflect the characteristics of inland rivers. After years of effort, the integrated model of the Heihe River basin system was completed. The model is ahead of existing models in terms of completeness of functions, performance, simulation and prediction capabilities, and application of remote sensing data. 
The watershed system model for the Heihe River basin consists of four main modules: a geomorphology-based ecohydrological model for the upstream area (GBEHM); a hydrological-ecological integrated watershed-scale model for the downstream area (HEIFLOW); a socioeconomic model (WEM); and a microbehavior-based model (ABM). These are joined by two interface models connecting the ecohydrological model and the economic systems: the land use and water resource models. The model has been successfully applied to fine closure of the multi-scale water balance in the basin, water use efficiency and water productivity analysis, medium- and long-term ecohydrological simulation and prediction, the study of ecological responses to key water management measures, and the construction of a decision support system for sustainable development of the basin. Fig 1. Framework of the watershed system model. The natural system is represented by the ecohydrological model; the socioeconomic systems are represented by the economic system model and the agent-based model (ABM). Land-use and water resource models are interface models between the natural and economic systems. The exchange variables between the different components of the watershed system model are illustrated along the arrows that signify the directions of the coupling variables. Li X*, Zhang L*, Zheng Y, Yang D, Wu F, Tian Y, Han F, Gao B, Li H, Zhang Y, Ge Y, Cheng G, Fu B, Xia J, Song C, Zheng C. Novel hybrid coupling of ecohydrology and socioeconomy at river basin scale: A watershed system model for the Heihe River basin. Environmental Modelling & Software, 2021, 141:105058.
In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is too large to be represented within the available storage space. For instance, taking the arithmetic mean of two numbers by adding them and dividing by two, as done in many search algorithms, causes error if the sum (although not the resulting mean) is too large to be represented, and hence overflows. The most common result of an overflow is that the least significant representable bits of the result are stored; the result is said to wrap. On some processors, like GPUs and DSPs, the result saturates; that is, once the maximum value is reached, any attempt to increase it always returns the maximum integer value.

- 8 bits: maximum representable value 2^8 − 1 = 255
- 16 bits: maximum representable value 2^16 − 1 = 65,535
- 32 bits: maximum representable value 2^32 − 1 = 4,294,967,295 (the most common width for personal computers as of 2005)
- 64 bits: maximum representable value 2^64 − 1 = 18,446,744,073,709,551,615 (the most common width for personal computers, but not necessarily their operating systems, as of 2015)
- 128 bits: maximum representable value 2^128 − 1 = 340,282,366,920,938,463,463,374,607,431,768,211,455

Since an arithmetic operation may produce a result larger than the maximum representable value, a potential error condition may result. In the C programming language, signed integer overflow causes undefined behavior, while unsigned integer overflow causes the number to be reduced modulo a power of two, meaning that unsigned integers "wrap around" on overflow. This wrap-around is the cause of the famous "split screen" in Pac-Man. In particular, if the addition of two positive integers wraps, the result can be an unexpected, much smaller value. 
For example, with unsigned 32-bit integers, 4000000000u + 1000000000u = 705032704u. In computer graphics or signal processing, it is typical to work on data that ranges from 0 to 1 or from −1 to 1. An example of this is a grayscale image where 0 represents black, 1 represents white, and values in between represent varying shades of gray. One operation that one may want to support is brightening the image by multiplying every pixel by a constant. Saturated arithmetic allows one to blindly multiply every pixel by that constant without worrying about overflow, by sticking to a reasonable outcome: all pixels larger than 1 (i.e. "brighter than white") simply become white, and all values "darker than black" simply become black.

Overflow behavior by language:

| Language | Unsigned integer | Signed integer |
| C | modulo power of two | undefined behavior |
| C++ | modulo power of two | undefined behavior |
| C# | modulo power of 2 in unchecked context; OverflowException in checked context | modulo power of 2 in unchecked context; OverflowException in checked context |
| Python 2 | N/A | convert to long |
| Scheme | N/A | convert to bigNum |
| Smalltalk | N/A | convert to LargeInteger |
| Swift | causes error unless using special overflow operators | causes error unless using special overflow operators |

In some situations, a program may make the assumption that a variable always contains a positive value. If the variable has a signed integer type, an overflow can cause its value to wrap and become negative. This overflow violates the program's assumption and may lead to unintended behavior. Similarly, subtracting from a small unsigned value may cause it to wrap to a large positive value, which may also be unexpected behavior. Multiplying or adding two integers may result in a value that is non-negative but unexpectedly small. If this number is used as the number of bytes to allocate for a buffer, the buffer will be allocated unexpectedly small, leading to a potential buffer overflow. 
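The wrap-around result quoted above, and the saturating alternative, can both be reproduced in Python (whose own integers never overflow) by masking to 32 bits; this is a sketch of the two behaviors, not of how C actually stores values:

```python
MASK32 = 0xFFFFFFFF  # 2**32 - 1, the largest unsigned 32-bit value

def wrap_u32(x):
    """Unsigned 32-bit wrap-around: keep only the least significant 32 bits."""
    return x & MASK32

def saturate_u32(x):
    """Saturating arithmetic: clamp to the representable range instead."""
    return min(max(x, 0), MASK32)

print(wrap_u32(4000000000 + 1000000000))      # 705032704, as in the text
print(saturate_u32(4000000000 + 1000000000))  # 4294967295, pinned at the maximum
```

The same masking trick with 0xFF models 8-bit registers, which is exactly the arithmetic behind the Donkey Kong anecdote later in the article.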
Techniques for mitigating integer overflow problems

Programming languages implement various mitigation techniques against accidental overflow: Ada and Seed7 (and certain variants of functional languages) trigger an exception condition on overflow, while Python (since 2.4) seamlessly converts the internal representation of the number to match its growth, eventually representing it as long, whose capacity is limited only by the available memory. Run-time overflow detection for C is also available through compiler sanitizers such as UndefinedBehaviorSanitizer.

Techniques and methods that can mitigate the consequences of integer overflow include:
- Defending against integer-based attacks for C/C++ by using subtyping, as described in "Efficient and Accurate Detection of Integer-based Attacks".
- The CERT As-if Infinitely Ranged (AIR) integer model, a largely automated mechanism for eliminating integer overflow and integer truncation.

In languages with native support for arbitrary-precision arithmetic and type safety (such as Python or Common Lisp), numbers are promoted to a larger size automatically when overflows occur, or exceptions are thrown (conditions signaled) when a range constraint exists. Using such languages may thus help mitigate this issue. In some such languages, situations are still possible where an integer overflow could occur. An example is explicit optimization of a code path which the profiler considers a bottleneck. In the case of Common Lisp, this is possible by using an explicit declaration to type-annotate a variable to a machine-size word (fixnum) and lower the type safety level to zero for a particular code block. In Java 8, there are overloaded methods, for example Math#addExact(), which throw ArithmeticException in case of overflow. 
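The contrast described above - Python growing its integers versus a fixed-width machine word - can be illustrated with the standard library's ctypes, whose C integer types perform no overflow checking and simply truncate:

```python
import ctypes

# Python's own int simply grows past the 32-bit boundary:
big = 2**31
print(big)  # 2147483648 -- no overflow, the value is exact

# Forcing the same value into a C signed 32-bit int wraps to the minimum:
wrapped = ctypes.c_int32(2**31).value
print(wrapped)  # -2147483648
```

This is also a convenient way to experiment with the wrap-around bugs described in this article without writing any C.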
On 30 April 2015, the Federal Aviation Administration announced it will order Boeing 787 operators to reset the aircraft's electrical system periodically, to avoid an integer overflow which could lead to loss of electrical power and ram air turbine deployment; Boeing planned to deploy a software update in the fourth quarter. The European Aviation Safety Agency followed on 4 May 2015. The error happens after 2^31 centiseconds (248.55134814815 days), indicating a 32-bit signed integer.

It is impossible to progress past level 22 of the arcade game Donkey Kong because of an integer overflow in its time/bonus calculation. The game takes the level number, multiplies it by 10 and adds 40. At level 22 the time/bonus number is 260, which is too large for its 8-bit register (maximum 255), so it wraps around and leaves only 4 as the time/bonus - not long enough to complete the level. In Donkey Kong Jr. Math, when you try to calculate a number over 10000, only the first 4 digits are shown.

See also:
- Arithmetic underflow
- Arithmetic overflow
- Buffer overflow
- Heap overflow
- Stack buffer overflow
- Pointer swizzling
- Software testing
- Static code analysis

References:
- Joshua Bloch, "Nearly All Binary Searches and Mergesorts are Broken", Google Research blog, 2 June 2006.
- Pittman, Jamey. "The Pac-Man Dossier".
- Seed7 manual, section 15.2.3 OVERFLOW_ERROR.
- The Swift Programming Language, Swift 2.1 Edition, October 21, 2015.
- Python documentation, section 5.1 Arithmetic conversions.
- Reddy, Abhishek (2008-08-22). "Features of Common Lisp".
- Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. ISBN 0-262-16209-1.
- Wright, Andrew K.; Felleisen, Matthias (1994). "A Syntactic Approach to Type Soundness". Information and Computation 115 (1): 38–94. doi:10.1006/inco.1994.1093.
- Macrakis, Stavros (April 1982). "Safety and power". ACM SIGSOFT Software Engineering Notes 7 (2): 25–26. doi:10.1145/1005937.1005941.
- "F.A.A. Orders Fix for Possible Power Loss in Boeing 787". New York Times. 30 April 2015.
- "US-2015-09-07: Electrical Power - Deactivation". Airworthiness Directives. European Aviation Safety Agency. 4 May 2015.
In my previous article about how to verify an email address in ASP.NET MVC on Windows Azure, I covered changing your model and database with Entity Framework Code First migrations. This leaves several more steps to complete:
1. Amend the model to allow for storing of emails and verification information.
2. Apply these model changes to the database.
3. Create a SendGrid account on Azure and configure it within your web app.
4. Collect the additional information on user sign-up.
5. Amend the registration process to send relevant emails and prevent automatic sign-in.
6. Change the login process to ensure the user is verified before logging in.
7. Handle the verification link for the specified user.
In this series of posts I'll be covering each of these steps as part of my ASP.NET Web Apps on Azure series.
Series: ASP.NET Web Apps on Windows Azure
- Part 1 - Creating and Deploying an ASP.NET MVC Web App to Windows Azure
- Part 2 - Connecting an ASP.NET MVC Web App with SQL Azure
- Part 3 - Connecting SQL Management Studio to SQL Azure
- Part 4a - How To…Verify Email Address: Entity Framework Code First Migrations
- Part 4b - How To…Verify Email Address: Setting Up SendGrid on Windows Azure
Sending Email from Windows Azure
One thing that often catches developers (including myself) out when starting to develop in a cloud infrastructure like Windows Azure is the seeming lack of certain services…such as an SMTP server. I think this catches us out because we're used to working on virtual machines with hosting companies, or on-premise within a larger network. In both of these situations it's common to have an SMTP server readily available. Windows Azure web sites and web roles aren't general-purpose virtual machines – they're web servers designed to serve web sites – and as such they don't have these often unnecessary features such as an SMTP server. This leaves you with a few options for sending email:
- Some form of on-premise email forwarding solution.
- Use a general-purpose third-party SMTP service.
- Run an SMTP server of your own inside Azure.
- Send your emails via an Office 365 account.
- Use an Azure-integrated SMTP service (i.e. SendGrid).
For this article I've chosen to go the SendGrid route for the following reasons:
- Generally speaking, when making apps we're probably not going to have an on-premise environment. We want everything in the cloud with as little maintenance expense as possible.
- I don't really like backing my highly available Windows Azure environment with a general-purpose third-party SMTP service. In my experience the management of these services can become painful, and their availability is not of the same standard as Azure. Furthermore, emails sent from these general-purpose SMTP servers get marked as spam far too often.
- I'd rarely advise anybody to set up something like an SMTP server within Azure – it's complicated and fraught with issues. I'm sure some have successfully achieved this, but my worry is that you'd spend so much time setting it up and maintaining it that it would take over everything else.
- Sending via Office 365 is a great option but, as with option one, you may well not be using Office 365.
- SendGrid is really easy to set up, it's manageable from the Azure management portal and offers a lot before you have to start paying for anything.
In future articles I may cover some of these other options, such as using Office 365 or a third-party provider, but right now we're going to concentrate on SendGrid.
What is SendGrid?
From the SendGrid documentation:
SendGrid is a cloud-based SMTP provider that allows you to send email without having to maintain email servers. SendGrid manages all of the technical details, from scaling the infrastructure to ISP outreach and reputation monitoring to whitelist services and real time analytics.
So SendGrid is all about making email easy to send from the cloud – in our case Windows Azure.
Getting Started With SendGrid
To get started you need to add SendGrid in the Azure management portal:
- Go to the Azure Management Portal.
- Click the 'New' button in the bottom left corner.
- Choose the 'Store' option.
- This should open a pop-up where you need to find the SendGrid option. Select it, and press the next button.
- On this screen choose your pricing plan, what you want to call the SendGrid service within Azure and where you want to locate it. Then click next. (I selected the free plan, called it MyFirstWebAppSendGrid and chose East US as the location.)
- Next select the purchase option (note that I chose the free plan and as such have no costs associated at this stage).
- Finally you may have to wait a few seconds, but you should then see that your SendGrid account is successfully configured within the Windows Azure management portal.
Using SendGrid Within Your Web App
Once all of this is configured, using SendGrid within your app is really straightforward.
Get SendGrid SMTP Connection Details
- Click on the provisioned SendGrid service within the Azure management portal.
- Next, on the footer, click 'Connection Info'.
- You should then be presented with all the connection info for using the SendGrid SMTP service. Make a note of the username, password and server.
Configuring Your Web App
Configuring the SMTP server in your web app is really straightforward. All you need to do is open your web.config file and add the following section inside the configuration element:

<system.net>
  <mailSettings>
    <smtp from="from@example.com">
      <network host="server" password="password" userName="username" port="587" />
    </smtp>
  </mailSettings>
</system.net>

Here you need to replace the server, username and password with the ones you got from the previous step. The from address should also be changed to something more relevant.
Sending an email
At this stage you are ready to send emails from your web app using the standard .NET SMTP provider.
Here is some sample code that demonstrates how you would send an email using the standard provider:

// Create a mail message
var message = new MailMessage();

// Add recipient
message.To.Add(new MailAddress("firstname.lastname@example.org", "Matt Whetton"));

// Set subject
message.Subject = "Welcome to Codenutz";

// Set HTML body
message.Body = "<p>Hello and welcome</p>";
message.IsBodyHtml = true;

// Create the smtp client
var smtpClient = new SmtpClient();

// Send the message
smtpClient.Send(message);

There are a lot of variations that can be added to this, such as:
- Specifying a plain-text version of the email
- Specifying the from address
- Adding attachments
But for now we have everything configured ready for sending our email verification messages. Next we'll be looking at amending the user sign-up process to collect the additional data we require, and to send and handle the verification emails. Please get in contact if you have anything you'd like to ask, want more detail, have any feedback or just want to say that you found something useful. I'm always welcoming of feedback and love to hear that you're finding these articles useful.
I'm testing ECE 2.0.1 on AWS. My deployment is a small one with three ECE nodes with full roles. One ECE node was terminated. ECE should have high availability even when one node is down. But in reality I cannot get anything from the Cloud UI. It just tells me "The requested cluster is currently unavailable", like below.

It should indeed be HA; that's a configuration that gets used/tested a lot. As you implied, the problem is that the proxy is not routing to the adminconsole cluster for some reason. I think you answered this implicitly, but could you just confirm that you had expanded the adminconsole cluster to 3 zones? (With all 3 data zones, or 2 data zones and a tie-breaker?) When you can access the cluster directly (did I understand that right, you got the port from Docker and hit that directly?) but not via the proxy, it normally means one of two things: the cluster can't elect a single master, e.g. the quorum size is too high - can you GET _cluster/settings and GET _cat/nodes to see if that's the case via the direct connection?

So ... the third reason for the issues you describe is indeed that the proxy table is not being built correctly, as you suggested - I didn't mention it before because it's pretty rare, normally involving the persistence store (ZooKeeper) being in an unhealthy state (e.g. out of ZK quorum). Based on the log message, the proxy doesn't believe the cluster has ES quorum, which likely means that it has the wrong master (because it's reading from a stale ZK store). ZooKeeper being unhealthy in some way could also cause slowness on some API requests/UI pages (i.e. the ones that aren't cached). Do the logs for frc-zookeepers-zookeeper show anything interesting?

Yeah, at a quick glance it looks like ZK12 couldn't connect to ZK11, so quorum was lost when ZK10 went down. ZK has occasionally been flaky when people have used odd cluster sizes (like 4), but I've never seen it go down when 1 node out of 3 has been lost. When I run into this sort of thing, normally I hand-edit the config files (in increasing id order) to bring the ZK cluster down to 1 node, get it running in standalone mode, etc. - then add the second node (I have notes I can dig out tomorrow). Other people have told me that restarting both frc-directors-director and frc-zookeepers-zookeeper (ZK first, I think) has worked for them - that's probably an easier place to start!

The issue is fixed after restarting the director and zookeeper services. But if this happens in a production environment, this would not be a good solution, right? So as I understand it, this problem is caused by ZooKeeper. When one ZooKeeper participant leaves suddenly, the other participants cannot form a healthy ZooKeeper ensemble again. This leaves stale data that does not match the actual cluster state. So the proxy thinks that the ES cluster has no quorum, because the proxy just checks cluster state from the ZooKeeper store.

As I mentioned in a previous post, in my experience what happened to you is pretty rare - I typically know about / help fix any ZK outages in the ~100 or so ensembles we own or manage; over the last 2 years I think there have been 5-6 similar issues, and maybe 1 of them wasn't attributable to users manually going out of quorum, or at least some unusual ZK config (e.g. even quorum size, overloaded ZK, etc.). In your case, based on your description and the logs, I can't see any possible configuration issues that could explain what happened. So it seems to fall into the non-ideal "ZK flakiness" category, which empirically has been decent - and of course ZK is a fairly common platform in production environments (though of course you're at 1/1 failures, so have cause to doubt that!). For my interest, could you also share the logs from the other good zone ("id 11" in the ZK logs)?
This morning a friend pointed out this story about how QuickBooks maker Intuit manages 10 million lines of code. The punch line is that they manage 10 million lines of code just like you should be managing your code. Is your business using professional-grade methods and tools? Are you sure? Intuit manages their massive code base using the same professional-grade methods that almost every software business should be using. Perhaps you'd choose different tools, but the process is the key. What does Intuit's process include?

Intuit uses continuous integration. So can you. Intuit's continuous integration (CI) tool is Jenkins, an open-source product not unlike CruiseControl, CruiseControl.Net and numerous others. I use CruiseControl.Net. Use what fits you until it doesn't. "But my programmers will never agree to that", you say. Aside from wondering who runs the place, I suggest you review this discussion on getting developers bought in to continuous integration. You shouldn't have to work very hard at it, if you're working with professionals.

Intuit uses source control. So can you. Intuit's source control tool is Perforce, which offers a free version. If you want something simpler or less expensive, there are plenty of options – including some very dependable free source management systems. Examples include Git, Mercurial, Kiln (a hosted version of Mercurial), Vault, Subversion and several others. I use Kiln and Vault.

Intuit manages multiple builds. So can you. You can do that with a source control tool in conjunction with your CI tool of choice. You could make this more complex, but really, it's about builds and source management. And you *can* do that. Why do you need multiple builds? For one, when your tools change. You have production code deployed. It breaks. You need to fix it, and you sure can't wait until all of your testing is done on the new tool set. Check out the code with the old tool set, fix it, check it in, build it.
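To make the CI-plus-source-control setup concrete, here is a minimal sketch of a CruiseControl.Net project definition — the project name, repository URL and paths are placeholders of my own, not anything Intuit-specific:

```xml
<!-- ccnet.config: one CI project that polls source control and builds a solution. -->
<cruisecontrol>
  <project name="MyApp-trunk">
    <!-- Poll Subversion for new commits (hypothetical repository URL). -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      <workingDirectory>C:\ci\myapp</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Build the checked-out solution with MSBuild. -->
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
        <projectFile>MyApp.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>
```

A second `<project>` block pointing at a maintenance branch gives you the "multiple builds" setup: fixing production code on the old tool set is then just a commit to that branch.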
You won't believe how simple this is, especially if you manage multiple toolset releases with source control. Your hair might even grow back.

Intuit automates code analysis and testing. So can you. They use Coverity in conjunction with their own in-house tools, but you can start today with FxCop, NDepend, Simian, Gendarme, NAnt, various CI tools, TestComplete and a host of other CI-enabled code analysis and test tools. You can use VMware's Workstation for Windows or Fusion for Mac (or both, as I do) to manage OS snapshots and provide the same consistent test bench for each set of tests, without manually having to build a test system, run tests, restore and so on. Avoid the drudgery just like Intuit does, without losing the benefits of greater and more consistent quality.

Stop waiting until you're "big enough"

If you're waiting until you're "big enough", you're not only wasting time, but you're slowing down your ability to get big in the first place. You can't wait until you have 10 million lines of code to manage before deciding to go pro. By that time, you're either drowning in code and tests and builds or you're history. Or maybe you're surviving as a slave to your software. For every hour that you spend manually building binaries, building installs, testing installs, testing your app and doing other grunt work that your competition uses CI and source control systems to manage, guess what your competition is doing? They're spending their time coding, marketing, working with customers, planning strategy, sleeping and enjoying their families. The earlier you incorporate professional methods and professional tools in your software business, the earlier you get out of "dig a hole and fill it up" mode. One of the reasons you might not be doing as well as you'd like is that you're still using the methods and tools that a little software business uses. Go pro. Start today.
About Complex Sampl.R

In research and in Monitoring & Evaluation, sampling is a fact of life. While a simple random sample (SRS) is the 'gold standard' approach, it is often not practical. A multi-stage cluster sampling design with probability of selection proportional to size (PPS) can be used when compiling a complete sampling frame is cost-prohibitive or impossible. This approach helps investigators conserve resources otherwise spent on fuel and time. This app is designed to select clusters for a two-stage Probability Proportional to Size (PPS) cluster sampling design. Only a .csv file (xlsx is not yet supported) with at least one column labeled 'population' is required. This column should contain the population or number of Ultimate Sampling Units (USUs) within each Primary Sampling Unit (PSU). If the number of USUs in a given PSU is smaller than the cluster size entered, the app will automatically sample from the next row(s) until there are sufficient USUs for a cluster.

Known limitations:
- First, the code currently only samples subsequent rows in the dataset when the population of a PSU is smaller than the cluster sample size. Therefore, if a PSU is selected near the bottom of the .csv file the loop may stop prematurely. The result of this limitation would be a cluster with more USUs sampled than it actually contains.
- The other limitation is that the loop currently only sums the population across a group of PSUs until there are as many or more USUs available than the cluster size. In rare cases, this may mean that more USUs are sampled from a PSU than it contains. For example, suppose the desired cluster size was 100. A selected PSU (Community A) contained 80 USUs and the following PSU (Community B) contained 30. The loop would recognize that between Community A and Community B there are more than 100 USUs. It would then cheerfully divide the 100 USUs by 2 PSUs, assigning half (50) of the cluster to be taken from Community A (with 80 USUs) and 50 to be taken from Community B (though it only contains 30).

Both of these issues should be infrequent, but will be addressed in future versions. Find the source code for this app under a GNU license at https://github.com/jwilliamrozelle/figuredio.

Usage:
- (Optional) At this time, if you wish to stratify your sample, each stratum can be uploaded as a separate csv. You may then follow steps 2-6 for each stratum.
- Upload a csv file using the 'Browse...' button. Your csv file must (at least) contain a column called 'population', containing the number of USUs in each PSU. Other columns may be included as desired. If you wish, you may use the example data (downloadable below) to test the app, or as a template.
- Input the number of PSUs in your sample, and the number of USUs in each PSU.
- Click 'Generate Random Start'.
- (Optional) After generating a random start, you may select your own random start. This may be useful when trying to reproduce a previous selection.
- Download the generated sample in xlsx format by clicking 'Download Sample Result'. The two sheets are:
  - Sample: containing all uploaded info with additional columns detailing the selection
  - Sample Info: containing documentation of your inputs, the sample size and the sampling interval
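The core selection logic — cumulative population totals, a fixed sampling interval, and a random start — can be sketched as follows. The app itself is written in R; this is an illustrative Python sketch of the same systematic PPS idea, with function and variable names of my own:

```python
import random

def pps_systematic(populations, n_clusters, start=None):
    """Systematic PPS selection: pick n_clusters PSUs with probability
    proportional to their population, using a random start and fixed interval."""
    total = sum(populations)
    interval = total / n_clusters
    if start is None:
        start = random.uniform(0, interval)  # random start in [0, interval)
    # Cumulative totals: PSU i "owns" the range (cum[i-1], cum[i]].
    cum, running = [], 0
    for p in populations:
        running += p
        cum.append(running)
    selected, i = [], 0
    for k in range(n_clusters):
        point = start + k * interval
        while cum[i] <= point:   # walk to the PSU whose range covers the point
            i += 1
        selected.append(i)
    return selected, interval

# With populations [10, 40, 30, 20], 2 clusters and start 25, the selection
# points 25 and 75 fall in PSU 1 (range 10-50) and PSU 2 (range 50-80).
```

Larger PSUs own wider cumulative ranges, so they are proportionally more likely to contain a selection point — which is exactly the PPS property.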
this.$ returns undefined in ready callback

Description
this.$ resolves to undefined in the ready lifecycle callback in a component in Chrome. After testing out many things, the problem only happens in one component that is part of iron-pages. It happens after the browser directly reloads on that page, but not when the page is accessed from another page. The app works in Firefox as expected. The code is generated from polymer-cli, application template.

<app-location route="{{route}}"></app-location>
<app-route route="{{route}}" pattern="/:page" data="{{routeData}}" tail="{{subroute}}"></app-route>
<app-route route="{{subroute}}" pattern="/:id" data="{{subrouteData}}"></app-route>
<iron-pages selected="[[page]]" attr-for-selected="name" fallback-selection="view404" role="main">
  <my-page name="test-plan-page" route="{{route}}" plan-id="{{subrouteData.id}}" id-test-case-path="{{idTestCasePath}}"></my-page>
  ....
</iron-pages>

After removing all of the code from the component and just leaving the imports, the bug is still there. The page only works once I am down to three imports and vaadin-grid is not one of them. Even if I remove vaadin-grid and leave the rest of the imports, the bug is still there and this.$ remains undefined. This is really weird.
<link rel="import" href="../../bower_components/polymer/polymer.html">
<link rel="import" href="../../bower_components/paper-fab/paper-fab.html">
<link rel="import" href="../../bower_components/paper-input/paper-input.html">
<link rel="import" href="../../bower_components/paper-input/paper-textarea.html">
<link rel="import" href="../../bower_components/vaadin-grid/vaadin-grid.html">
<link rel="import" href="../../bower_components/paper-menu-button/paper-menu-button.html">
<link rel="import" href="../../bower_components/paper-item/paper-item.html">
<link rel="import" href="../shared-styles.html">

<dom-module id="my-page">
  <template>
    Lorem ipsum
  </template>
  <script>
    Polymer({
      is: 'my-page',
      ready: function () {
        console.log(this.$);
      }
    });
  </script>
</dom-module>

All of my other page components work and most have vaadin-grid working flawlessly.

Live Demo
Unable to give you a demo as the script is hosted locally and I'm unable to reproduce it separately.

Steps to Reproduce
- Using Chrome
- My app is generated using polymer-cli
- my-page is located as a page in iron-pages and has a ready callback
- Upon refreshing the browser with that page selected, this.$ resolves to undefined
- If I navigate to the page from another page, it works as expected.
Note: Once the website is loaded from a different page, everything is normal.

Expected Results
It should return an object with all DOM references by id.

Actual Results
Returns undefined

Browsers Affected
[x] Chrome
[ ] Firefox - works as expected
[ ] Edge
[ ] Safari 9
[ ] Safari 8
[ ] IE 11

Versions
Polymer: v1.7.0
webcomponents: v0.7.22

This might be a dup of issue #4160

I don't think so, as I'm trying to access it in the ready callback and I'm using Polymer 1.7.0.

Can't reproduce. Go to polymer.html and look into _marshalIdNodes: this is where nodes are assigned to $. Try to debug what's happening there in your case.

Ok so I've recorded a video of the issue.
Ok so the problem was with the Polymer-generated method in the starter kit:

_routePageChanged: function(page) {
  this.page = page || 'view1';
},

Removing the OR condition solved it. Leaving it was causing that page to bug out and either not get properly initialized, or not get initialized at all (I'm not sure which). In the case of this particular page it was always defaulting to and initializing view1, even though the page was selected in iron-pages and its ready callback was obviously being triggered. This was isolated to this page and, as I described in the original issue, the problem could be resolved by removing imports until there were three, and one of them was not vaadin-grid - a weird thing indeed. Logging output all through Polymer, I couldn't pinpoint the underlying issue, and I'm not sure if this is Polymer (although it seems like it is) or the routing element or something else. If you guys need further info let me know.

How are you using the Vaadin grids? Are you laying out all your pages? And what version of Polymer? I ask because I had a similar issue with iron-pages, caused by the way I prefer to keep data and repeat visual presentations. I mapped large value objects and had my own selectedView API that would only lay out what was visible; I held state in the maps, lining up the potential next set of views and updating duplicate data way behind the scenes in inline promises and via a blobject. It made iron-pages go nuts. The first page sometimes would just not be there after the initial load; tap a new tab or scroll a view out and it lost the "element". Really it lost the data, which for my use cases was indistinguishable from the view. iron-pages had no idea stuff was lost.
I worked around it in 1.2–1.5 using the array selector, which I loved - and with a set of factory arrays and templatized shards of templates that I just put in the DOM myself. It let me fill up to 25 2500-box grids with literally hundreds of thousands of data points. But I had to know the state of everything to keep the grid data running, which included my own more active approach to object garbage collection. Since I was calling new over and over to fill my views, I would crush the buffers that browsers use for mapping. Plus the movement of firstElementChild to parentNode in the W3C spec made it so that independent management of children required, and still requires, web-wide independent handling of the first child. So, the point: if grids are stressing the browser to manage garbage and build your nodes from their own indexes/URI shards, you will lose first pages in traditional show-me/hide-you template scenarios (I just worked around this issue in a string-literal template project - if I put all my DOM in the DOM and then parsed it myself, I had to take special care of the first child and any deviation from a well-patterned tree). If the grids are stressing your browser cache (the animation spinning/waiting thing makes it worse actually, especially if you don't unify your event handlers and keep them at bay), you're going to lose stuff in the DOM for a while. My guess is Vaadin is causing the stress, is aware of it, and has a limited workaround, and those limits were surpassed when you were importing a lot of junk. The workaround there is to recognize that your imports trickle down (usually).

I'm using 1.7, and I have nowhere near that amount of data in the page that was causing problems. I have tens of rows with a couple of columns. I don't think this was stress related, as the app is still very light and I see no reason that it would cause any issues in browser GC.

@lnenad Are you still experiencing this issue in your application? If so, could you reproduce it in a JSBin so that we can test and debug it?

I see this problem when I use the ECMAScript 6 keyword "class" instead of the info object passed to the Polymer(info) function. For example this:

addEventListener('WebComponentsReady', function () {
  class UserView extends Polymer.Element {
    static get is() { return 'user-view'; }
    ready() {
      var grid = this.$.grid;
    }
  }
  customElements.define(UserView.is, UserView);
});

will give this.$.grid undefined.

@vasstavr You need to call super.ready(): http://jsbin.com/cefiyuqixu/edit?html,console,output For more info see #4529 and #4412

@TimvdLippe Thanks!!!

Closing per https://github.com/Polymer/polymer/issues/4175#issuecomment-320149642 We can reopen and investigate once we have a runnable JSBin
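For anyone landing here with the same symptom, the route-observer behavior discussed earlier in this thread can be sketched as a plain function. This is an illustrative sketch (the element stand-in and function name are mine, not the shipped starter-kit code): eagerly falling back to 'view1' while `page` is still undefined during a direct reload can initialize the wrong page, whereas returning early lets routing settle first.

```javascript
// Sketch of a guarded version of the starter-kit style observer:
//   this.page = page || 'view1';
function routePageChanged(el, page) {
  if (page === undefined) {
    return; // route data not ready yet - don't force the default page
  }
  el.page = page || 'view1'; // '' (the root URL) still maps to the default
}

const el = { page: undefined };
routePageChanged(el, undefined);        // ignored: routing not settled yet
routePageChanged(el, 'test-plan-page'); // now the real page is applied
console.log(el.page);                   // "test-plan-page"
```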
Algorithms for Reinforcement Learning Errata for the printed book Last update: May 18, 2013 Page numbers refer to the printed copy. The online version (the "draft") is up-to-date. Thanks to my PhD student, Gabor Bartok, who has found many of these errors. - p. xi. Section 2) should be Section 2 (no closing parenthesis) - p. 1. The dot should be in between the bars in the definition of the infinity norm, not on the top. That is, ∥⋅∥∞ is the intended form and not ||∞. Also, in "which, if θ, which" the part "which, if θ" should be deleted. - p. 2. The footnote from p. 5 explaining the meaning of "almost surely" should be moved - p. 5. In the example on gambling the personal pronoun "his" should be replaced by - p. 9. In Eq. (1.14) on the right-hand side of the equation Q(y,π(x)) should be Q(y,π(y)). - p. 12. In footnote 1, add "if" before "it". - p. 21. line 1: "then" should be "than". - p. 22. The text "goal is to approximate the value function V underlying " should - p. 23. Delete "be" from "is no longer be guaranteed". After θ(λ) in the middle of page delete ".". The phrase "using V θ" should be "using the chosen features φ". - p. 25. "some methods using which" should be "some methods that avoid" - p. 32. The word "complicate" (in the middle of page) should be "complicated". - p. 40. The text "Gittins (1989) has shown" should be "Gittins (1989) showed". - p. 43. The 4th displayed equation and the text surrounding it should be deleted. This is the equation that says that RT UCRL2(δ) = O(D2||2|| log(T∕δ)∕ε+εT). This equation holds (under the cited conditions), but it does not lend itself to a logarithmic regret - p. 47. On line 4 of the 1st paragraph of Section 3.3., "optimalas" should be "optimal as". On the same page, on line 3, after Eq. (3.1): ", Algorithm 12 the pseudocode of Q-learning." should start with a full stop and is missing the word "shows". So, the text should be ". Algorithm 12 shows the pseudocode of Q-learning.". - p. 48.
Section 3.2 is mentioned twice in the same sentence (around the middle of the page). The second occurrence should be deleted. - p. 56, Algorithm 16, line 7. The correct update equation is b ← b + Rt+1 ⋅ z [Tom - p. 58. The definition of regret should be RT = Tρ* - T [Hamid Reza Maei, - p. 65. The definition of norm is missing the so-called homogeneity condition: for any λ ∈ ℝ, v ∈ V , f(λv) = |λ|f(v). On the same page, at the bottom, "ℓ∞ norms" should be "ℓ∞ norm". - p. 66. "uniformly bounded" should be "bounded" (when mentioning a single function). - p. 67. "Polish mathematicians" should be singular: "Polish mathematician". - p. 68. At the top of the page, "Assume that T is a γ-contraction." should go into the next - p. 69. In the line preceding the definition of B(), "uniformly bounded" should be For further information, visit http://www.ualberta.ca/~szepesva/RLBook.html.
4 edition of The Microsoft guide to managing memory with DOS 5 found in the catalog. The Microsoft guide to managing memory with DOS 5 |LC Classifications||QA76.9.M45 G66 1991| |The Physical Object| |Pagination||xi, 191 p. :| |Number of Pages||191| |LC Control Number||91023054| The Ultimate Guide to Windows Server from Azure to the design of Windows Server , Microsoft can help customers benefit from some of the same cloud efficiencies in their own datacenters. For some organizations, this requires reconsidering the role of hardware and software in operations. A software-defined datacenter evolves. Windows 10 memory management I have a recently built PC w/ a Gigabyte GA ZX Gaming 7 MOBO, an Intel 6th gen i7 K 4 Gig processor, 2 Samsung GB Pro M.2 NVMe Internal SSDs, 2 Samsung 1 TB Evo SATA 3 SSDs, an GeForce GTX 6 GB Vid Card, 2 x 6 TB WD SATA HDDs, and 32 GB of Crucial DDR4 RAM. Microsoft's chat and collaboration platform Teams may have arrived some time after Slack, but thanks to its integration with Microsoft , has a few tricks of its own up its sleeve. Segmentation• Memory-management scheme that supports user view of memory i.e. a collection of variable-sized segments, with no necessary ordering among segments Users view of a program Basic Method• Segments are numbered and are referred to by a segment number i.e. a logical address consists of a two tuple. The kernel is the central module of an operating system (OS). It is the part of the operating system that loads first, and it remains in main e it stays in memory, it is important for the kernel to be as small as possible while still providing all the essential services required by other parts of the operating system and kernel code is usually loaded into a. Managing Memory ‐ Chapter #6 Amy Hissom Key Terms Bank — An area on the motherboard that contains slots for memory modules (typically labeled bank 0, 1, 2, and 3). 
Burst EDO (BEDO) — A refined version of EDO memory that significantly improved access time over EDO. BEDO was not widely used because Intel chose not to support it. BEDO memory is stored on pin DIMM. Job Search and Career Checklists Together by your side The wisdom of illumination Writing against the curriculum Financial statement of the expenses of the election of members to Parliament, S. Horrocks & E. Hornby, June 1818 Act respecting Indians The foundations of the University of Salford In His Steps The Worlds Submachine Guns, Vol. 1 The best instruction book ever! Children, adults, and shared responsibilities One of the most significant features of MS-DOS 5 is its ability to effectively use extended and expanded memory to break the k barrier. This guide is aimed at DOS 5 users and provides information on the different types of available memory, describes how memory works and how to install it. Get this from a library. The Microsoft guide to managing memory with DOS 5: installing, configuring, and optimizing memory on your PC. [Dan Gookin]. The translation between the bit virtual memory address that is used by the code that is running in a process and the bit RAM address is handled automatically and transparently by the computer hardware according to translation tables that are maintained by the operating system. Any virtual memory page (bit address) can be associated with any physical RAM page (bit address).Windows Advanced Server: 8 GB. Since DOS has given way to Microsoft Windows and other bit operating systems not restricted by the original arbitrary KiB limit of the IBM PC, managing the memory of a personal computer no longer requires the user to manually manipulate internal settings and parameters of the system. How to manage memory with DOS (MS-DOS Featuring DOS ) by Mark Minasi. 
DOS has greatly expanded the range of things that DOS encompasses: data recovery through the UNDELETE and UNFORMAT programs, all-around user friendliness through the DOSSHELL program, general utilities via the DOSKEY command-history option, and new capabilities of MODE and other commands.

Download this app from the Microsoft Store for Windows. See screenshots, read the latest customer reviews, and compare ratings for Intel® Optane™ Memory and Storage Management.

DiskPart command reference:
- repair: Repairs the RAID-5 volume with focus by replacing the failed disk region with the specified dynamic disk.
- rescan: Locates new disks that may have been added to the computer.
- retain: Prepares an existing dynamic simple volume to be used as a boot or system volume.
- san: Displays or sets the storage area network (SAN) policy for the operating system.

When you start typing in the To, Cc, and Bcc fields in Outlook, you'll see suggestions appear based on what you've typed. Suggestions are broken into two categories: Recent People and Other Suggestions. Names and addresses that appear in Recent People are stored in the Auto-Complete List. Outlook builds the Auto-Complete List by saving the names and addresses you've previously used.

Microsoft included utilities in DOS 5 and DOS 6 to do this. They were called HIMEM.SYS and EMM386.EXE. Some power users preferred to use third-party memory managers instead; more on those in a bit. For general use, these lines in CONFIG.SYS suffice to enable and take full advantage of them: device=c:\dos\
DOS the Easy Way (updated for XP, Vista, & Win7): The book DOS the Easy Way has long been one of the most popular books on DOS. Now it is available in downloadable form. It has been desktop published for attractive, book-style printout and it includes all of the original text and appendixes (including the big chapter on batch files).

The Expanded Memory Specification (EMS) is a standard developed by Lotus, Intel, and Microsoft. Expanded memory can be either memory on a memory expansion card or a part of main memory. The specification describes how this memory can be used by mapping a 64 kB part of it into the Upper Memory Area between 640 kB and 1 MB.

The operating system cannot anticipate all of the memory references a program will make. Sharing: allow several processes to access the same portion of memory; it is better to allow each process access to the same copy of a program rather than have its own separate copy. (ECS Operating Systems: Memory Management, 5.)

I have a Surface Book with the Core-i7/dGPU and the GB HDD. I currently am using the Surface Book as a desktop attached to the Surface Dock and two 24" Dell UD monitors. I've been using the Surface Book since it came out, and while the firmware/software updates have been steadily making it better, it's still far from being a capable workstation in my eyes.

Memory Management Architecture Guide. Applies to: SQL Server (all supported versions), Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, Parallel Data Warehouse. Windows Virtual Memory Manager: the committed regions of address space are mapped to the available physical memory by the Windows Virtual Memory Manager.

Microsoft Press books, eBooks, and online resources are designed to help advance your skills with Microsoft Office, Windows, Visual Studio, .NET, and other Microsoft technologies.
A disk operating system (abbreviated DOS) is a computer operating system that resides on and can use a disk storage device, such as a floppy disk, hard disk drive, or optical disc. A disk operating system must provide a file system for organizing, reading, and writing files on the storage disk. Strictly speaking, this definition does not apply to current generations of operating systems.

Memory management: the concept of a logical address space that is bound to a separate physical address space is central to proper memory management. A logical address is generated by the CPU (also referred to as a virtual address); a physical address is the address seen by the memory unit. Logical and physical addresses are the same in compile-time and load-time address-binding schemes; they differ under execution-time binding.

The area in memory up to 640 KB is called conventional memory or lower memory. This is where most of the work is done. DOS is the memory manager for conventional memory (it controls it). The upper memory area is covered here because it is important when you use DOS 5 or higher.

Memory management goals: operating systems use intelligently sized containers (memory pages or segments); not all parts of a program are needed at once, so tasks operate on a subset of memory; this is optimized for performance, reducing the time needed to access state in memory. Virtual vs. physical memory.

Administrator's Guide for Microsoft Application Virtualization (App-V), MDOP Information Experience Team. Microsoft Application Virtualization (App-V) helps businesses provide their end users with access to virtually any application, anywhere, without installing applications directly on their computers. Applies to: App-V
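The segmentation and DOS addressing notes above can be made concrete with a small sketch. In real-mode x86 (the environment DOS runs in), a 20-bit physical address is formed from a 16-bit segment and a 16-bit offset as segment * 16 + offset; the function name below is my own, for illustration only:

```python
def real_mode_address(segment, offset):
    """Real-mode x86: physical address = segment * 16 + offset (20-bit result)."""
    return ((segment << 4) + offset) & 0xFFFFF

# The top of conventional memory: segment 0xA000, offset 0 is the 640 KB boundary.
print(hex(real_mode_address(0xA000, 0x0000)))   # 0xa0000 (640 * 1024)

# Different segment:offset two-tuples can name the same physical byte.
print(real_mode_address(0x1234, 0x0010) == real_mode_address(0x1235, 0x0000))  # True
```

This aliasing of segment:offset pairs is one reason DOS-era memory management (conventional memory, the upper memory area, EMS page mapping) was so fiddly to configure by hand.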
Rethinking Character Journals on Nov. 13, 2013, 11:56 a.m.

For years now in my RPG campaigns that I run I have been offering players XP (or the system equivalent) for doing character journals. What this means is that after each session a player can go on the game's wiki and write something about the session for game rewards. This has worked out fairly well, and it's more popular among some players than others. But after being automatic in campaigns for so long, recently I've been taking a step back and evaluating the process again.

So why character journals? Really, upon reflection, in theory they serve two purposes. For players they are an opportunity to express bits of their character that wouldn't otherwise come out during the session. And this can help role-playing. For the GM, on the other hand, they're an instrument to gauge what parts of the session the players found noteworthy, or otherwise gain insight into how the players are viewing the campaign.

And as I said before, it works well enough. But there are also times when it doesn't work so well. Sometimes character journals end up being dry recaps of the series of events from the previous session. And these neither demonstrate characterization nor provide much of a gauge to the GM. For that reason, I've been thinking about character journals, and whether there might be a better way to serve the two purposes of the journals: an outside-the-play-session venue to express one's character, and a mechanism to gauge player interest and feedback.

One idea I have been considering is that of character take-aways and session questions. They would work like this: at the beginning of every session each player can give a character take-away from the previous session. This can be posted on the wiki like a journal, or just given orally before the start of the session proper. In it they can point out one thing their character took away from the events of the last session.
This might be a lesson learned, some observation about another character or an event, anything really. This would give a very explicit mechanism for demonstrating something about their character. This might add to the role-playing.

Session questions, on the other hand, would be asked at the end of each session and then answered on the wiki or at the beginning of the next session. Basically the GM would ask two questions for which player feedback would be useful. And then players can answer them, having the time between sessions for thoughts on the matter. This would also allow the GM to more easily direct questions at areas where gauging player feedback would be the most useful. This may also more easily engage busy players, since little between-session time is required.
Do snakes slither? NEED SERIOUS ANSWER?

The lack of limbs does not impede the movement of snakes, and they have developed several different modes of locomotion to deal with particular environments. Unlike the gaits of limbed animals, which form a continuum, each mode of snake locomotion is discrete and distinct from the others, and transitions between modes are abrupt.

Lateral undulation
See also: Lateral undulation
Lateral undulation is the sole mode of aquatic locomotion, and the most common mode of terrestrial locomotion. In this mode, the body of the snake alternately flexes to the left and right, resulting in a series of rearward-moving 'waves'. While this movement appears rapid, snakes have rarely been documented moving faster than two body-lengths per second, often much less. This mode of movement is similar to running in lizards of the same mass.

Terrestrial lateral undulation is the most common mode of terrestrial locomotion for most snake species. In this mode, the posteriorly moving waves push against contact points in the environment, such as rocks, twigs, irregularities in the soil, etc. Each of these environmental objects, in turn, generates a reaction force directed forward and towards the midline of the snake, resulting in forward thrust while the lateral components cancel out (pictured: a banded sea snake, Laticauda sp.). The speed of this movement depends upon the density of push-points in the environment, with a medium density of about 8 along the snake's length being ideal. The wave speed is precisely the same as the snake's speed, and as a result, every point on the snake's body follows the path of the point ahead of it, allowing snakes to move through very dense shrubbery and small openings.

When swimming, the waves become larger as they move down the snake's body, and the wave travels backwards faster than the snake moves forwards. Thrust is generated by pushing the body against the water, resulting in the observed slip. In spite of overall similarities, studies show that the pattern of muscle activation is different in aquatic versus terrestrial lateral undulation, which justifies calling them separate modes. All snakes can laterally undulate forward (with backward-moving waves), but only sea snakes have been observed reversing the motion, i.e. moving backwards via forward-traveling waves.

Sidewinding
See also: Sidewinding
A Mojave rattlesnake (Crotalus scutulatus) sidewinding. Sidewinding is most often employed by colubroid snakes (colubrids, elapids, and vipers) when the snake must move in an environment that lacks any irregularities to push against (and which therefore renders lateral undulation impossible), such as a slick mud flat or a sand dune. Sidewinding is a modified form of lateral undulation in which all of the body segments oriented in one direction remain in contact with the ground, while the other segments are lifted up, resulting in a peculiar 'rolling' motion. This mode of locomotion overcomes the slippery nature of sand or mud by pushing off with only static portions of the body, thereby minimizing slipping. The static nature of the contact points can be shown from the tracks of a sidewinding snake, which show each belly scale imprint without any smearing. This mode of locomotion has a very low caloric cost, less than 1/3 of the cost for a lizard or snake to move the same distance. Contrary to popular belief, there is no evidence that sidewinding is associated with hot sand.

Concertina locomotion
See also: Concertina movement
When push-points are absent, but there is not adequate space to use sidewinding because of lateral constraints, such as in tunnels, snakes rely on concertina locomotion. In this mode, the snake braces the posterior portion of its body against the tunnel wall while the front of the snake extends and straightens. The front portion then flexes and forms an anchor point, and the posterior is straightened and pulled forwards. This mode of locomotion is slow and very demanding, up to seven times the cost of laterally undulating over the same distance. This high cost is due to the repeated stops and starts of portions of the body, as well as the necessity of using active muscular effort to brace against the tunnel walls.

Rectilinear locomotion
See also: Rectilinear locomotion
The slowest mode of snake locomotion is rectilinear locomotion, which is also the only one in which the snake does not need to bend its body laterally, though it may do so when turning. In this mode, the belly scales are lifted and pulled forward before being placed down and the body pulled over them. Waves of movement and stasis pass posteriorly.
Once upon a time I was a regular Oracle Forms programmer (and sometimes still am). These days I spend most of my time with Application Express. This makes me happy as I did enjoy mod_plsql - an ancestor (of sorts) of Apex. Occasionally I notice some parallels between the two; even more occasionally I get around to writing an entry for the world to see - a strange urge for some, but it seems that people read even more mundane topics.

There are many attributes available within the Apex environment. By attributes I mean little boxes in the various wizards ready for me to type something in. Sometimes it seems overwhelming. Then I remind myself how flooded with settings the Forms environment must seem. Of course I snap myself back to normal when I think about what I've seen of JDeveloper.

Have you ever wondered what some of these settings do? Recently I was creating a copy of a data entry form within Apex so I could present a cut-down / read-only version of the page. There were some fields that, instead of being select lists, needed to display their descriptive value - not the return value that is stored in the column. There are a number of solutions to this problem, as with most problems. One solution I came to involved utilising the "Post Calculation Computation" attribute of the item. This means that after I source the item from the database column, I can transform its value into something else. The obvious solution here would be to pass the value to a function that determines the descriptive form of the value - from some sort of reference code table.

I mentioned Forms programming before, right? Immediately I thought of post-query triggers and the pros and cons behind various coding techniques in these triggers. First and foremost was the very same practice of taking a value and converting it to a description.
This was an expensive task as not only did it require an extra hit on the database, it also needed another round trip from the Forms runtime to the application server. The better solution was to incorporate the request within the query - perhaps via a key-preserved view. The same rings true within Application Express. Sure, we don't have another round trip between servers since all the action is happening on the database; however it still requires another select statement to be executed. For a dinky (an Aussie/British colloquialism meaning small and insignificant) little Apex page, what's an extra hit on the ever-powerful Oracle database? Try to see what happens when scaling your request to thousands of users. So perhaps some of our old habits can carry on to this modern-day programming tool?

I'm certainly not saying this post calculation attribute is not useful. I have another field populated via a list manager with a popup LOV. This means the values are separated by a colon. In my application, this field holds a list of e-mail addresses. When I want to present this list to the user in a pretty format, I can use this attribute to convert it to something suitable for a HTML page:

REPLACE(:P1_EMAIL_LIST, ':', '<br>')

Of course if you wish to do this, you may need to ensure your item type does not convert special characters.

It seems my plane is about to call for boarding, so I'll save you all from further ramblings... for now. Enjoy your weekend.
Hierarchical auto-numbering (with three levels) in Excel

What if we have a three-level hierarchy and need to enumerate only "B"s (according to the pattern), but some "C-"s interfere with the "B"s. Problem: to get the column B result from a given column A.

A+
B    1
B    2
C-
B    3
A+
B    1
C-
B    2
A+
C-
B    1
B    2
B    3

P.S. The task arose from the need to enumerate a complex hierarchy in Excel. Imagine that A is level 1, B is level 2, and C is level 3 (with some abuse of logic in the example above, as C- in the pattern goes after A, which in practice is usually not the case). The simple case of a two-level hierarchy is shown here.

I also went with a helper column (as much as I detest them personally) to show the row of each A+.

Put this in C1:
=ROW(A1)
Put this in C2 (it uses an expanding range with a rebasable starting point; drag down as far as your data goes):
=IF(A2="A+",ROW(A2),C1)
Put this in B2 and drag down as far as your data goes:
=IF(OR(A2="C-",A2="A+"),"",IF(A1="A+",1,MAX(INDIRECT("B" & C2 & ":B" & ROW(A1)))+1))

Hope that helps. Here are the results I received (columns A, B, and the helper column C):

A+        1
B    1    1
B    2    1
C-        1
B    3    1
A+        6
B    1    6
C-        6
B    2    6
A+        10
C-        10
B    1    10
B    2    10
B    3    10

Easiest would be to add in two intermediary helper columns, the first of which we'll call column C. Here, we will count only which "A+" we're on, like so [starting at C2; C1 is hardcoded as 1]:
=IF(A2="A+",C1+1,C1)
This will increment every time a new row has "A+" in column A.

Then column D will track the highest # reached so far, for that iteration in column C [starting at D2; D1 is hardcoded as 1]:
=IF(A2="A+",0,IF(A2="B",B2,D1))
This will restart at 0 for each new "A+", and for each "B" it will take the value shown in column B. Then for each "C", it will simply repeat the value from the row above (the previous "B" reached).

Finally you can put in your result, as follows [starting at E1]:
=IF(A1="B",B1,"")
This will show BLANK for either "A+" or "C-", and will show the B-count if column A = "B".
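The enumeration rule itself, independent of any spreadsheet formulas, can be sketched in a few lines of Python (a hypothetical helper, not part of the question): the counter restarts at each "A+", increments at each "B", and is simply unaffected by "C-" rows.

```python
def number_level2(rows):
    """Number the 'B' entries, restarting at each 'A+' and ignoring 'C-' rows."""
    counter = 0
    result = []
    for tag in rows:
        if tag == "A+":
            counter = 0
            result.append(None)      # level-1 rows get no number
        elif tag == "B":
            counter += 1
            result.append(counter)
        else:                        # "C-" (level 3) neither resets nor increments
            result.append(None)
    return result

rows = ["A+", "B", "B", "C-", "B", "A+", "B", "C-", "B", "A+", "C-", "B", "B", "B"]
print(number_level2(rows))
# [None, 1, 2, None, 3, None, 1, None, 2, None, None, 1, 2, 3]
```

This matches the column B pattern in the problem statement: 1, 2, 3 for the first group, 1, 2 for the second, and 1, 2, 3 for the third, with the interleaved C- rows skipped.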
Protocol providers define how xMatters accesses servers for outgoing notifications. While user service providers restrict protocol selection at the company level, the super administrator can configure protocol providers that are available throughout xMatters. xMatters supports the following industry-standard protocols: These protocols are used to send notifications or create conferencing sessions over a cloud-computing provider: - HTTP Conference: Protocol used to create conferencing sessions using a voice cloud provider other than xMatters. - HTTP Generic: Protocol used to send SMS notifications via a cloud SMS provider other than xMatters. - HTTP xMattersConference: Protocol used to create conferencing sessions using the xMatters voice cloud. - HTTP xMatters SMS: Protocol used to send SMS notifications via the xMatters SMS cloud. - GCM: Used to send messages via the Google Cloud Messaging service, including push notifications to the xMatters Android application. - Voxeo: Protocol used to send voice notifications via the Voxeo online service. This protocol currently supports only events injected using an xMatters communication plan. These protocols are used for two-way email message communications to any desktop computer or email-capable hand-held device: - SMTP: Simple Mail Transfer Protocol, the de facto standard for transmitting email over the Internet. These protocols are not recommended for sending SMS messages to mobile phones as there is no guarantee that the message will ever arrive on the intended device. If the message does arrive, it will usually result in a one-way message where no response is ever returned to xMatters. No tracking information is returned to xMatters during the notification process, and a succession of email servers can be involved in each transmission. 
SNPP and WCTP These protocols are the latest two-way network-based protocols for use with pagers and SMS text providers: - SNPP: Simple Network Paging Protocol, used to send notifications to a pager over the Internet. - WCTP: Wireless Communication Transfer Protocol, used to send and receive responses over the network. These protocols are recommended as the first protocols to use when xMatters is communicating with text devices as they provide a receipt ID for each notification and, if the provider supports two-way notifications, xMatters can check for responses from the target user on their device. Some providers also provide xMatters with additional information, such as read receipt information that can be logged when the user reads the message on the device, even if they do not respond. Another option when configuring protocols is to use an SMS text message aggregator company. xMatters can communicate with these companies via a two-way protocol (typically SMPP/SNPP/WCTP) and will guarantee delivery of the text message to a target mobile phone; in most cases, the aggregator will also handle all responses back into xMatters. These aggregators charge for this service, but will provide service-level agreements for the message delivery and response. Any combination of the above protocols can be used with xMatters, and for each service provider, the administrator can define the preferred order of the protocols. xMatters will attempt to send the message out using the first protocol; if it fails, the attempt is logged and xMatters moves on to the next protocol. The key to configuring xMatters for your text message service providers is to find out what protocols they support, and to implement as many of them as possible so that you have redundancy in place. Super administrators can manage the available protocol providers using the web user interface. - Click the Admin tab. - On the Administration menu, click Protocol Providers. 
- xMatters displays a current list of protocol providers for the system. - From the Protocol Providers list, do any of the following: - To modify an existing protocol provider, click its name in the list to view its details. - To remove a protocol provider from the list, select the check box next to the protocol providers you want to remove, and then click Remove Selected. - To add a new protocol provider, click the Add New link, click the Provider of Type drop-down list and select a Provider Type, and then specify the details. For details on configuring each Protocol Provider, see Protocol provider details reference. You can help optimize the throughput of your xMatters node deployment by adjusting the maximum session size for protocol providers. The session size represents the number of notifications transacted within a single session. For example, if you set the maximum session size of an email protocol provider to 20, and there are at least 20 notifications to dispatch, the device engine connects to the email server, sends 20 notifications, and then disconnects. To tune the performance of protocol providers, ensure that the session size is as large as possible for each protocol provider in use. For email providers in particular, a session size of 20 to 50 is recommended. Protocol providers that support text messages include a Maximum Message Length setting. This setting specifies the maximum number of characters in a single notification the protocol provider will handle before truncating the remaining text. There are two separate portions to the message text: the message body and the appended text. The message body is the actual notification content and the appended text contains the notification key, response choices, etc. If the Maximum Message Length setting is less than the total number of characters in the appended text portion of the message, the recipient will not be able to reply to the notification. In effect, the notification becomes one-way. 
Consider the following examples: - Maximum Message Length setting: 100 characters - Message body length: 50 characters - Appended text: 150 characters In this case, the appended text is discarded because it exceeds the maximum message length setting. The user will receive the entire message body of the notification because it is less than the maximum message length, but will be unable to reply because the appended text portion of the notification is missing. - Maximum Message Length setting: 350 characters - Message body length: 150 characters - Appended text: 300 characters In this case, the entire appended text portion of the notification will be included, but the message body will be truncated to 50 characters. The user will receive the notification and will be able to reply because the entire appended text portion was included in the notification.
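The truncation rule in the two examples above can be modeled in a short sketch. This is an illustrative reconstruction of the documented behavior, not xMatters code; the function name and return shape are my own, and it assumes the appended text takes priority over the message body, as the examples describe:

```python
def deliverable(max_len, body, appended):
    """Model the Maximum Message Length rule: the appended text (notification
    key, response choices, etc.) takes priority; if it alone exceeds the limit,
    it is discarded and the notification effectively becomes one-way."""
    if len(appended) > max_len:
        # Appended text cannot fit at all: send the body only, truncated if needed.
        return body[:max_len], False          # False = recipient cannot reply
    room_for_body = max_len - len(appended)
    return body[:room_for_body] + appended, True

# Example 1: max 100, body 50, appended 150 -> full body delivered, no reply possible.
msg, can_reply = deliverable(100, "B" * 50, "A" * 150)
print(len(msg), can_reply)   # 50 False

# Example 2: max 350, body 150, appended 300 -> body truncated to 50, reply possible.
msg, can_reply = deliverable(350, "B" * 150, "A" * 300)
print(len(msg), can_reply)   # 350 True
```

Both calls reproduce the outcomes described in the examples, which is a useful sanity check when choosing a Maximum Message Length for a provider.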
Azure Data Explorer High Ingestion Latency with Streaming

We are using stream ingestion from Event Hubs to Azure Data Explorer. The documentation states the following: "The streaming ingestion operation completes in under 10 seconds, and your data is immediately available for query after completion." I am also aware of limitations such as: "Streaming ingestion performance and capacity scales with increased VM and cluster sizes. The number of concurrent ingestion requests is limited to six per core. For example, for 16 core SKUs, such as D14 and L16, the maximal supported load is 96 concurrent ingestion requests. For two core SKUs, such as D11, the maximal supported load is 12 concurrent ingestion requests."

But we are currently experiencing ingestion latency of 5 minutes (as shown on the Azure Metrics) and see that data is actually available for querying 10 minutes after ingestion. Our dev environment is the cheapest SKU, Dev(No SLA)_Standard_D11_v2, but given that we only ingest ~5000 events per day (per the metric "Events Received") in this environment, this latency is very high and not usable in the streaming scenario, where we need to have the data available for queries in under 1 minute.

Is this the latency we have to expect from the dev environment, or are there any tweaks we can apply in order to achieve lower latency in those environments as well? How will latency behave with a production environment like Standard_D12_v2? Do we have to expect those high numbers there as well, or is there a fundamental difference in behavior between dev/test and production environments in this regard?

Did you follow the two steps needed to enable streaming ingestion for the specific table, i.e. enabling streaming ingestion on the cluster and on the table?
In general, this is not expected; the dev/test cluster should exhibit the same behavior as the production cluster, with the expected limitations around the size and scale of the operations. If you test it with a few events and see the same latency, it means that something is wrong. If you did follow these steps and it still does not work, please open a support ticket.

Please also have a look at setting the IngestionBatching policy (https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/batchingpolicy) for your tables/databases. It affects the internal batching performed by the ingestion service when streaming ingestion is not available. You can set the batching policy as low as 10 seconds. This policy takes 5-10 minutes to take effect.

I did enable streaming on the cluster but not explicitly on the table. And we did not configure an IngestionBatching policy. So this means, if we don't specify it and have a low volume of data, ADX waits for 5 minutes by default until it performs the writes?

We just tested it. Streaming was only activated on the cluster but not on the table itself. Just running .alter table policy streamingingestion enable was sufficient to fix the issue. Thank you for pointing this out.

Thanks Markus for the confirmation. This is correct: the default batching policy for queued ingestion is five minutes.
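Putting the two fixes discussed above together, the relevant management commands look roughly like this. `MyDatabase` and `MyTable` are placeholder names, and the policy JSON shows only the time-span property; treat this as a sketch based on the linked batching-policy documentation, not a complete policy definition:

```kusto
// Enable streaming ingestion on the table
// (streaming must already be enabled on the cluster).
.alter table MyTable policy streamingingestion enable

// Lower the fallback batching window for queued ingestion to 10 seconds.
// The change takes 5-10 minutes to take effect.
.alter database MyDatabase policy ingestionbatching '{"MaximumBatchingTimeSpan": "00:00:10"}'
```

With both in place, low-volume event streams should no longer fall back to the default five-minute batching window.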
I made the trek to Columbus yet again for my annual visit to Ohio LinuxFest, and once again I was impressed by a good event. I took the afternoon off from work to drive down from Michigan (about a 3-hour drive) and made sure to get there on time for the opening keynote, which was Karen Sandler from the Software Freedom Conservancy on "The Battle Over Our Technology". By an interesting coincidence I had brought her up in a discussion the day before on why I would never trust IoT security if the code was not available. Karen has always been very open about sharing her experience with getting a pacemaker installed, and trying to get a look at the code (which she couldn't, because it is proprietary). And we have since had a recall that made about 500,000 people go to their doctors' offices to get a code update because the proprietary code was very insecure. In talking about the importance of open source, Karen brought up the meta-issue that it is not just about the practical issues of efficiency, but also about moral and ethical issues.

After that we had a nice happy hour sponsored by Fusion Storm that took place in the vendor room, and I got to spend some time with 5150, Verbal, and John Miller while enjoying the nacho bar, and eventually made my way to my room for the night.

Saturday started off strong with a keynote from Máirín Duffy, "Who Cares if the Code is Free? User Experience & Open Source". Máirín is a UX expert working on the Fedora Project, and really got into the design issues with open source, and made a strong pitch for getting people involved outside of coding, and in particular how to get involved in UX. I appreciated this because a healthy open source ecosystem requires a lot of different skills, and in my view the idea that coders are the only ones who matter is a kind of sickness in our ranks.
After that, there were 4 tracks, including Sysadmin and Development. As you might expect, the Security track got most of my attention, and I have to say I was impressed by the speakers there. The first was Kent Adams from SIP.US on VoIP security basics. As is usual in the area of security, none of this was exactly rocket science, but when your phone service comes via Internet Protocol you have all of the usual security issues, such as how your firewall is configured, who might be sending packets your way, and whether your software is patched and up-to-date. It was a good talk, and Kent was a very engaging speaker. After that, Tom Kopchak from Hurricane Labs had a talk called "Building a Malware Analysis Lab With Open Source Software". He talked about using open source tools like Squid, Snort/Suricata, and pfSense, and tying them together with some scripting. Then it was time to break for lunch.

After lunch I started with Roberto Sanchez. Last year he did a very good talk about how he prepares his CS students by getting them involved in tools and practices like using GitHub, making pull requests, and so on, which I really loved. This year, his talk was "Secure Cloud: Linode with Full Disk Encryption". Linode is a provider that offers inexpensive Linux virtual servers, and Roberto took us through how to do this securely by setting up your virtual server in an encrypted manner. I think a lot of what he discussed would apply in other areas as well, but taking us through the process step-by-step was valuable. Following that I decided to move over to the /dev/random track to hear Dru Lavigne discuss the new features in FreeNAS 11. Dru is someone I have talked to at a variety of conferences over the years, including having breakfast together at Indiana LinuxFest a few years back, so I was glad to see her here. But I went back to the Security track for an excellent talk called "Top 10 Easy Cybersecurity Wins for Linux Environments" by Michael Contino.
This was an excellent talk by a very knowledgeable speaker. Some of his tips were things I was aware of, but he also brought up some things that were new to me, and I want to follow up on those sometime. After his talk I met up with Joel McLaughlin and Allan Metzler of The Linux Link Tech Show for a little hallway conversation before Joel left, did a pass through the vendor room, then got into a hallway conversation with Michael Contino and a couple of other folks who were at his talk. Then my final Security track talk was by Cody Hofstetter from Sovereign Cyber Industries, called "Getting Hit by an 18-Wheeler: Privacy and Anonymity in the Modern Age". Most of what he talked about I knew, but he was such an engaging speaker that I was glad I was there.

The final keynote was Tarus Balog of The OpenNMS Group, who gave us the history of how he came to be the CEO of a successful company that sells free software, and the lessons he learned along the way. I first met Tarus when he gave the very first keynote at Indiana LinuxFest some years back, and he is both a great speaker and a great free software advocate. His talk was wonderful, and a fitting way to round out the talks for the day. We then retired to the ballroom for the after-party, and for me an unexpected finish when I won the raffle for a 3-D printer. I am planning to donate it to a useful charity such as e-NABLE, which makes hands for children who lack them.

Overall, it was a very good conference, and I really enjoyed the speakers. But there is a problem here with diversity. Outside of the keynoters, the only woman I could see presenting was Dru Lavigne, and I did not see any people of color. And based on my experience programming for Penguicon the last 4 years, this is probably because they just waited to see what proposals happened to come in. I have found that you need to pursue people to get the diversity you need, for whatever reason (I suspect "impostor syndrome" plays a role in at least some cases).
For example, last spring I had a great presentation to a packed room by Connie Sieh, who created Scientific Linux. What you might not have known is that I had been looking for her over a two-year period before I found her (she had retired, old addresses were no longer valid, etc.). And there were other people I made a point of going after because I knew what they could do. Another example is Ruth Suehle from Red Hat, whom I contacted every year to get a presentation. I talked to the person at OLF who will be booking speakers for the coming year and offered to pass along some of my contacts to help with this. Listen to the audio version of this post on Hacker Public Radio!
- Daily color inspiration from the popular Dribbble shots.
- Simply really good emails.
- The Community of People Who Design the Web.
- Collecting & combining colors we love. And every tenth color will be something more special & different.
- Flat Design Gallery & Showcase.
- Free video for anything. Use them in whatever you want, 100% license free.
- Smashing Magazine is an online magazine for professional Web designers and developers.
- A collection of resources for designers.
- A curated online gallery that filters creative content on the web.
- A curated gallery of stunning portfolio designs.
- A web design and development blog that publishes articles and tutorials about the latest web trends.
- The best new & interesting interaction design, curated by Animade.
- A collection of website components to spark your imagination.
- Discover the most creative and sophisticated advertising campaigns around the world.
- A lovingly curated gallery of interactive design.
- An independent archive of typography. This is a collection of typo signs, shop signs, and shop fronts.
- A showcase of well executed UI elements grouped into sections.
- French Design Index indexes beautiful web sites that were made in France.
- Pttrns is a curated library of iPhone and iPad user interface patterns.
- Line25 is a weblog based around the topic of web design, created and maintained by Chris Spooner.
- Abduzeedo is a collection of visual inspiration and useful tutorials.
- Your daily dose of design inspiration.
- Focusing on the small details that make an app / site stand out.
- Flat Design Showcase.
- A daily selection of interesting design examples & commentary by curators chosen by AIGA.
- Worthy of Note is a site aimed at Web Designers & Developers. It offers a wide range of resources to help ...
- Web Design Inspiration Site.
- A tumblr of mobile design inspiration.
- A magazine on beginnings, creativity, and risk.
- Gallery of websites and projects built with the Laravel framework.
- A division of UnderConsideration, cataloguing the underrated creativity of menus from around the world.
- Web Awards, Resources & Inspiration For Web Designers.
- Favourite Website Awards.
- Web Awards recognising the very best in cutting edge website design.
- Forrst is a community of people that help each other improve their product design skills.
- A painstakingly curated presentation of the most well designed apps.
- Httpster is a showcase of damn hot website design. Curated by Zurb.
- Minimalist CSS Gallery.
- A great site, bursting with inspirational work!
- siteInspire is a showcase of the finest web and interactive design.
- Designspiration is a resource to help you discover and share great design.
- Online gallery of spectacular one page websites.
- Search for inspiration.
- Showcase & Discover Creative Work.
- Show and tell for designers.
- Web Design Inspiration.
- The awards for design, creativity and innovation on the internet.
- Product landing pages gallery.
- Meeet connects developers and designers working on side projects.
- A daily leaderboard of the best new products.
- Creativepool, where companies and people connect through their work.
- Topics related to Web_design: HTML, CSS, JS, layout, UI, graphics, etc.
- Stack Exchange is a growing network of individual communities, each dedicated to serving experts in a specific ...
- Quora is your best source of knowledge. Ask any question, get real answers from real people.
- Web Designer Forum.
- A community of people in design and technology.
- A lightweight, mobile-first boilerplate for front-end web developers.
- A free library of HTML, CSS, JS nuggets.
- Bootstrap tutorials will help you learn the essentials of Bootstrap, from the fundamentals to advanced topics.
- Maxmertkit is the most customizable and easiest to use framework you've ever seen.
- The Mobile Help Framework to support your iOS app users.
- Flight is a fast, simple, extensible framework for PHP. Flight enables you to quickly and easily build RESTful ...
- A handy collection of cheat-sheets and shortcuts to speed up the work of designers and developers.
- This app is designed to help beginners grasp the powerful concepts behind branching when working with Git.
- A free Git & Mercurial client for Windows or Mac.
- A set of git extensions to provide high-level repository operations for Vincent Driessen's branch model.
- Free for 5 users, git or mercurial, lightweight code review, Mac and Windows client.
- Project management tool with fast, reliable & secure Git, Mercurial & Subversion hosting baked right in.
- Imagine a single process to commit code, review with the team, and deploy the final result to your customers.
- Powerful collaboration, code review, and code management for open source and private projects.
- Push changes to GitHub or Bitbucket. Deploy changes automatically to your server.
- Deploy from Github, Codebase and Bitbucket in less than a few minutes...
- Using Git on the command line can be difficult. Make your life easier with Tower.
- The easiest way to use GitHub on Mac.
- Web Content Accessibility Guidelines (WCAG) is developed through the W3C process in cooperation with indivi...
- This tool attempts to visually demonstrate how readable this colour combination is.
- We're a team of accessibility specialists. We look accessibility in the eyes every day.
- A community-driven effort to make web accessibility easier.
- A resource to provide information about which new HTML5 user interface features are accessibility supported.
Over the course of our project, Dr. Tony Hirst from the Open University has helped us think through the way course data might be visualised and built upon. Recently, he’s written a series of blog posts that discuss this in more detail and offer useful feedback to the project team about our APIs. You can read Tony’s blog posts on OUseful.info. The Academic Programme Management System (APMS) is designed to allow read-only access to the course data through APIs. However, these APIs allow for very few (if any) search parameters to be used, and as such were unsuitable for our use cases when developing applications based around course data. It was therefore necessary to import this data into our own data platform, ‘Nucleus’. Before Christmas I was asking people within the university for ideas about applications that could be developed for them, based around the concept of re-using course data that was already available. After meeting with a member of staff from the School of Computer Science, we developed an idea for an ‘assignment wizard’ that would make use of the course data already available, such as awards, modules, assessments and staff. The purpose of this application is to make the process of writing assignment documentation quicker, easier and more accurate. By tying the application in with assessment data, the assessment strategy delivered within the module will be identical to the strategy as defined in the validated module documents. The expected flow of the application was set out in a diagram. As well as reducing the amount of data that has to be entered by academics (such as learning outcomes, module details etc.), the versioning and PDF generation will make the writing process more efficient. Further to this, it allows one lecturer to write a part of the assignment brief, and another to log in and complete the assignment. A follow-up blog post will show the completed application, and start to evaluate it.
Back in August, I wrote a blog post that mentioned a paper that I had submitted to The International Conference on Information Visualization Theory and Applications, titled ‘Data Visualisation and Visual Analytics Within the Decision Making Process’. I found out this week that my submission has been accepted as a short paper. It can be downloaded from the Lincoln Repository. The (short) abstract is included below: Large amounts of data are collected and stored within universities. This paper discusses the use of data visualisation and visual analytics as methods of making sense of the collected data, analysing it to assess the effects of historical institutional decisions, and discusses the use of such techniques to aid decision making processes. It was suggested to me that a useful blog post would be one explaining some of the benefits of APMS by comparing processes before and after its implementation. It seemed like some visualisations in the form of process maps might be a useful way of doing it, but after looking at some it became apparent that they don’t really represent the changes as well as I had hoped. The fundamental principles of programme management have not changed as a result of the project, nor was it intended to change them. What has changed, and in my view at least has a big impact on the way we work with programme information, is how we go about recording information and the mechanisms that lie behind the various processes. Process maps make things like modifying a programme still appear fairly convoluted when it comes to the approvals, whereas in fact the workflows in APMS mean that it is fairly simple to either click the button to decline approval (sending it back for further work) or click the button to approve the changes (forwarding it to the next stage).
Here, then, are a few examples. Stand Alone Credit Creation: To create a new Stand Alone Credit module, there was a Microsoft Word form to complete containing information about the module selected, type of activity, academic school, and so on, with sections to confirm approval at the end. To accomplish the same in APMS, the module is selected from the list of existing modules, and the other information entered (many items as list selections). The academic confirms College approval and submits it to the Quality team who, having checked it, click the approve button and that’s it. Short Course Creation: To create a new short course there was a Microsoft Word Short Course Application form to complete, with fields including the course title, level, credit points, course leader, school and confirmation of approvals. Accompanying the application form would be the Short Course Specification, based on a Microsoft Word template, containing the course title, level, credit points, school, delivery mode, rationale, learning outcomes, learning and teaching strategies, module specification(s) and so on. To accomplish the same in APMS a proposal for a new short course is created and items selected from lists where available (e.g. school, course leader) or information typed in. The modules are selected from the module list (or new ones created) and the proposal submitted through an approvals and validation process. There is no duplication of information needed and each person involved can see all the information related to it. Programme Modification: To modify a programme an academic would complete a Programme Modification Form in Microsoft Word. A series of tick boxes would indicate whether all evidence had been provided, external examiner approval gained, revised module specifications provided and more. Details about the school, campus, affected programmes and modified modules would be provided, along with the rationale and summary of the change.
New programme and module specifications would be written and attached, which themselves would also contain details about the school, campus and other duplicative information. A programme modification in APMS starts with a discussion between the academic and a Quality Officer to determine whether the magnitude makes it a revalidation or a modification, and the Quality Officer starts the appropriate workflow. For a modification, certain fields are then available for change by the academic whilst others (that would require a revalidation) are not. Once the changes have been made to the programme and/or modules, the academic submits the modification proposal for approval, and it goes through the workflow getting the appropriate approvals until validated. There is no duplication of information, and everyone can see where in the process the modification is and all information relating to it. As one final example we briefly look at benchmark statements. The format of benchmark statements varies from subject to subject – in some cases they can be presented as numbered items whereas in others they are paragraphs of text. In APMS each item for a subject is presented as an individual item, having been painstakingly extracted from documentation by our project intern, Louise. Statements are presented automatically once the benchmark subject is selected, and the system presents them in a matrix with programme outcomes so that the programme leader can map where each is covered. Previously, each programme leader would need to conduct this exercise themselves and duplicate information throughout the document to make sure it was presented in the appropriate way.
Hello, I have a problem with my Dell Inspiron N5110 (64-bit) integrated webcam. Since I bought my laptop I didn't use the webcam, but I did try it long ago to take pictures of myself and it was working back then. And now, out of nowhere, I tried to use the webcam again but it's not working. I tried almost everything I saw in the Dell forum about this issue but nothing has worked so far. There is no "Imaging" section in my Device Manager. Also, when I tried to test my webcam using "http://www.testwebcam.com/", it says "cannot find camera". I tried to install drivers for Webcam Central but that didn't work either. I use Windows 7. Is there a problem with the hardware itself, or is the problem just with the drivers? Please, I need help to get this solved. I hope I don't have to change the integrated webcam... * Sorry for my poor English.

I see that you have tried almost all the troubleshooting steps to resolve the issue. However, check if you are facing the issue after installing the application given below. I recommend you uninstall the webcam software from ‘Programs and Features’ before installing the application. Then check if the webcam is working, or I request you to run diagnostics using Dell PC Diagnostics by clicking on the link below. Under ‘Hardware’, from the components listed under ‘Diagnostic Selector’, select ‘Camera’ and then click on ‘Run Diagnostics’. Follow the on-screen instructions to run the test and reply to this post with the status so I can assist you further.

Hello DELL-Sujatha K, I think I already tried both installing the driver and running the diagnostics, but it was not working. I tried to uninstall and reinstall the driver listed above, but it is still not working. It always shows "No supported webcam" in the Dell Webcam Central software. For the diagnostics, I followed the steps and the result says "A diagnostic category was specified and the applicable device does not exist." I also downloaded "Dell System Detect" but nowhere in it can I see the integrated webcam.
I ran the PC Checkup in Dell System Detect and everything is working fine and all the tests are done, but I cannot see the integrated camera anywhere, not even on my laptop. I forgot to mention that earlier yesterday I formatted my laptop, but the webcam problem already existed before formatting. I formatted my laptop because of viruses and the Bluetooth driver, which was not working, and I thought that after that all the problems in my laptop would be solved. The other problems are solved and the other drivers are working fine, except for the webcam. Can I please know what the nature of the problem is? Is it the hardware, that is, the integrated webcam itself, or its driver? And please help me solve this problem because it isn't working...

I suspect it is a hardware issue with the webcam. If you are comfortable, you may try reseating the connection of the webcam. Find the link below for the service manual: you will find the instructions for the webcam under Display assembly. Or you may take it to any local technician and get that checked. If it is still not working even after reseating the webcam cable, you may consider replacing it. If the system is under warranty, send me a private message with the system service tag to assist you further. Please click on the link below to check the warranty of the system.

Hi Sujatha, I followed every step in the service manual but was unable to repair the webcam and mic. What may be the possible cause of the detection failure of the cam and mic?

I would like to know the result of the PC diagnostics. If you are getting an error message that the webcam is not detected, I suggest that you reseat the webcam cable. Let me know the system model so I can provide the link for the service manual. Also let me know if the webcam is listed under ‘Imaging devices’ in ‘Device Manager’. If the system model is N5110, find the link below for the service manual. You will find the instructions under Display Panel.

I'm also using a Dell Inspiron N5110 laptop.
(Core i7.) I'm having the same problem as above. I'm using the Windows 8 operating system and I used the integrated webcam and microphone for the last 10 months, but now it's not detected. I can't use Skype. On the Dell driver web page I also can't find webcam drivers or software. The PC diagnostics didn't work on my computer. I searched many support web pages, but couldn't find a solution. I'm not satisfied with the Dell support for webcams and driver software :-( Even when registering with this community web page, my country is not listed in the application list! (My country is Sri Lanka.) I want to know the reason for the sudden failure of the webcam on my Inspiron N5110 laptop.

Please check if the webcam is listed in Device Manager. Open the charms bar by moving the mouse to the top right corner of the screen, type ‘Device Manager’ in the search box, and then, in the list of results, click ‘Device Manager’. Check if the webcam is listed under Imaging Devices; if not, check under Other Devices.

Dear Sujatha K, under Device Manager the webcam is not listed (Imaging), but earlier it was there in the list. I'm sure it is not detected, due to some kind of failure, isn't it?

Thank you for the update. I request you to follow the steps given below: Completely shut the system down. Turn on the system and at the Dell logo, tap the F2 key on the keyboard every two seconds. In the BIOS screen press F9 to load defaults and F10 to save. Press ‘ESC’ to exit from the BIOS screen. Next install the webcam application from the link given below. Let me know the status.
Edited by Cal, Webster, Tom Viren, Zack and 22 others

Python is a good programming language for beginners that continues to be a good programming language when you're no longer a beginner.

Starting to Program in Python

1. Python is an interpreted language, so you don't need a compiler. Linux and Mac operating systems come with Python already installed, so you can run it from the terminal simply by typing "python3". However, on Windows you will want an interpreter to interpret the Python code. Download it from http://www.python.org/download/ . The more user-friendly version is Python 3.
2. After you download the interpreter, read through the instructions for your platform to install it and do so.
3. Download a good syntax-highlighting code editor. If you're using Windows, Programmer's Notepad is a good choice. In Linux, every little text editor is a syntax-highlighting editor. For Mac, TextWrangler is the best option.
4. Write your first program:
   - Open the text editor.
   - Write the line below:
     print("Hello, World!")
   - Save the file, naming it "hello.py".
   - Open a terminal (or console, in Windows). In Windows, you do this by clicking Start -> Run and typing "cmd" into the prompt.
   - Now navigate to the directory where you created your first program and type (without the quotes) "python hello.py".
   - Now that you've proved your Python installation works and that you can write a program, you are ready for more advanced work.
5. Read the tutorial.
6. For video classes, visit ...
7. Now that you've got a good overview of Python, and you know how to write a basic script, think of a program you'd like to write. Good examples of programs to write at this level are:
   - Basic checkbook program.
   - Basic game.

- The entire Python syntax and core language concepts can be learned in 15 minutes, and can be kept in your head at once. If you're not understanding something, it's probably not Python causing the confusion, but rather the fact that you're new to the topic.
Take a break, let it cook, and come back when you're in a more pliable state.
- Text editors are a dime a dozen, so if you don't like the one you use, you can try a different one. Python on Windows ships with its own editor called "IDLE".
- Python is probably the easiest language to start out with, however.
- On Windows, Python isn't usually added to your PATH initially. You should go ahead and add it.
- If Python is too hard, try starting out with LOGO, a turtle-graphics language.
- Python is well-documented, but much of the documentation assumes you're coming to Python from another programming language. If Python really is your first attempt at programming, buy a book.
- Many people tackle programming initially because they want to write games. Get over it in the beginning; write your games after you've gotten a good command of the language. Game programming can be very difficult, so focus your attention first on learning the language by writing other programs that are both easier and useful to you.
- Game programming is very difficult! If you're interested in doing more than simple quiz games, you will need a good understanding of mathematics in order to avoid writing a lot of hacky stuff that never really works right. Don't let that scare you: after you've made a few attempts at writing basic games, go learn some mathematics and you'll find the material a lot easier than it sounds.
- Python is a structured language as well as an object-oriented language. That means there are functions, which originated in mathematics. It is highly recommended that would-be adult programmers have at least a college-algebra level of mathematics. However, this is not required; it just makes learning Python (and any other structured language) easier.
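As a concrete sketch of the "basic checkbook program" idea suggested above (all names and behaviour here are my own illustration, not from the article), one might start with something like:

```python
# A minimal checkbook: track a balance through deposits and checks.
# Positive amounts are deposits, negative amounts are checks written.

def process(transactions, balance=0.0):
    """Apply (description, amount) pairs and return the running register."""
    register = []
    for description, amount in transactions:
        balance += amount
        register.append((description, amount, balance))
    return register

if __name__ == "__main__":
    entries = [("paycheck", 500.00), ("rent", -350.00), ("groceries", -42.50)]
    for description, amount, balance in process(entries):
        print(f"{description:<12} {amount:>8.2f}  balance: {balance:>8.2f}")
```

A beginner can extend this in small steps: read transactions from a file, reject checks that would overdraw the account, or add a simple menu loop.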
Categories: Web Programming Recent edits by: Jeff, Daniel Bauwens, Anuj_Kumar1 In other languages: Español: Cómo comenzar a programar en Python Thanks to all authors for creating a page that has been read 189,562 times.
What's the difference between a TensorFlow Keras Model and Estimator? Both TensorFlow Keras models and TensorFlow Estimators are able to train neural network models and use them to predict new data. They are both high-level APIs that sit on top of the low-level core TensorFlow API. So when should I use one over the other?

As @jaromir pointed out, Estimators are deprecated and unavailable from TensorFlow 2.16. Use the Keras APIs instead. From the documentation: Warning: TensorFlow 2.15 included the final release of the tf-estimator package. Estimators will not be available in TensorFlow 2.16 or after. See the migration guide for more information about how to convert off of Estimators. Below is the original answer from 2018.

Background: The Estimators API was added to TensorFlow in Release 1.1, and provides a high-level abstraction over lower-level TensorFlow core operations. It works with an Estimator instance, which is TensorFlow's high-level representation of a complete model. Keras is similar to the Estimators API in that it abstracts deep learning model components such as layers, activation functions and optimizers, to make it easier for developers. It is a model-level library, and does not handle low-level operations, which is the job of tensor manipulation libraries, or backends. Keras supports three backends: TensorFlow, Theano and CNTK. Keras was not part of TensorFlow until Release 1.4.0 (2 Nov 2017). Now, when you use tf.keras (or talk about 'TensorFlow Keras'), you are simply using the Keras interface with the TensorFlow backend to build and train your model. So both the Estimator API and the Keras API provide a high-level API over the low-level core TensorFlow API, and you can use either to train your model. But in most cases, if you are working with TensorFlow, you'd want to use the Estimators API for the reasons listed below.

Distribution: You can conduct distributed training across multiple servers with the Estimators API, but not with the Keras API.
From the TensorFlow Keras Guide: The Estimators API is used for training models for distributed environments. And from the TensorFlow Estimators Guide: You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.

Pre-made Estimators: Whilst Keras provides abstractions that make building your models easier, you still have to write code to build your model. With Estimators, TensorFlow provides Pre-made Estimators, which are models you can use straight away, simply by plugging in the hyperparameters. Pre-made Estimators are similar to how you'd work with scikit-learn. For example, the tf.estimator.LinearRegressor from TensorFlow is similar to the sklearn.linear_model.LinearRegression from scikit-learn.

Integration with Other TensorFlow Tools: TensorFlow provides a visualization tool called TensorBoard that helps you visualize your graph and statistics. By using an Estimator, you can easily save summaries to be visualized with TensorBoard.

Converting a Keras Model to an Estimator: To migrate a Keras model to an Estimator, use the tf.keras.estimator.model_to_estimator method.

PS: Keras does handle low-level operations, it's just not very standard. Its backend (import keras.backend as K) contains lots of functions that wrap around the backend functions. They're meant to be used in custom layers, custom metrics, custom loss functions, etc. @DanielMöller thanks for your comment! Feel free to edit my answer if you'd like. While what you state is true, it ~seems~ that TF favors tf.keras given the amount of colab / documentation (of what little exists) is written with tf.keras over "canned" tf.estimator.Estimator or custom estimators. tf.estimator is built on top of Keras. The Keras API is actually more supported than the Estimator API in terms of distributed training.
You can read more here: https://www.tensorflow.org/guide/distributed_training. I agree with @hwaxxer. You need to update your answer for TensorFlow 2. @AliSalehi would you be willing to take a crack at editing the answer with an update? I don't have the time these days to keep up to date with the recent changes in TF. You can also write up your own answer and I will edit this answer to link to it. One more update from the docs: "Warning: TensorFlow 2.15 included the final release of the tf-estimator package. Estimators will not be available in TensorFlow 2.16 or after. See the migration guide for more information about how to convert off of Estimators." In my understanding, the Estimator API is for training on data at large scale and serving in production, because Cloud ML Engine can only accept Estimators. The description below from one of the TensorFlow docs mentions this: "The Estimators API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production."
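To make the "plug in the hyperparameters" workflow of a pre-made estimator concrete without needing TensorFlow at all, here is a pure-Python toy. The class name and method names are invented for illustration; they only mirror the construct-then-train-then-predict shape shared by tf.estimator.LinearRegressor and sklearn.linear_model.LinearRegression:

```python
class ToyLinearRegressor:
    """Toy 1-D least-squares regressor mimicking the pre-made-estimator shape."""

    def __init__(self, fit_intercept=True):
        # Hyperparameters go in the constructor, as with pre-made estimators.
        self.fit_intercept = fit_intercept
        self.slope = 0.0
        self.intercept = 0.0

    def train(self, xs, ys):
        # Closed-form ordinary least squares for one feature.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        self.slope = num / den
        self.intercept = my - self.slope * mx if self.fit_intercept else 0.0
        return self

    def predict(self, xs):
        return [self.slope * x + self.intercept for x in xs]

# Fit y = 2x + 1 exactly from four points, then predict.
model = ToyLinearRegressor().train([1, 2, 3, 4], [3, 5, 7, 9])
print(model.predict([10]))  # -> [21.0]
```

The point is the division of labour: the user supplies data and hyperparameters, while the estimator owns the model-building code, which is exactly what the pre-made Estimators (and scikit-learn) offer over hand-built Keras models.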
module Ringo::Tools
  # Show a visual representation of the passed in expression.
  class AstPrinter
    def print(statement)
      statement.accept(self)
    end

    def visit_expression(statement)
      return parenthesize(';', statement.expression)
    end

    def visit_print(statement)
      return parenthesize('print', statement.expression)
    end

    def visit_var(statement)
      return parenthesize('var', statement.name.lexeme) if statement.initializer.nil?
      return parenthesize('var', statement.name.lexeme, '=', statement.initializer)
    end

    def visit_binary(binary)
      parenthesize(binary.operator.lexeme, binary.left, binary.right)
    end

    def visit_conditional(conditional)
      parenthesize('?', conditional.expression, conditional.then_branch, conditional.else_branch)
    end

    def visit_grouping(grouping)
      parenthesize('group', grouping.expression)
    end

    def visit_literal(literal)
      literal.value
    end

    def visit_unary(unary)
      parenthesize(unary.operator.lexeme, unary.right)
    end

    # TODO: Add the remaining visitor methods as they are defined.

    private

    def parenthesize(name, *args)
      "(#{name} #{args.map { |arg| arg.respond_to?(:accept) ? arg.accept(self) : arg }.join(' ')})"
    end
  end
end
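A minimal usage sketch of the visitor pattern the printer relies on. The node classes below are hypothetical stand-ins for Ringo's real token and AST types (which are not shown here); the only assumption is that each node implements accept by dispatching to the matching visit_* method, and the printer is trimmed down to just the binary/literal cases:

```ruby
# Hypothetical stand-ins for the real token and AST node classes.
Token = Struct.new(:lexeme)

Literal = Struct.new(:value) do
  def accept(visitor)
    visitor.visit_literal(self)
  end
end

Binary = Struct.new(:left, :operator, :right) do
  def accept(visitor)
    visitor.visit_binary(self)
  end
end

# A trimmed-down printer using the same parenthesize logic as AstPrinter.
class MiniAstPrinter
  def print(expr)
    expr.accept(self)
  end

  def visit_binary(binary)
    parenthesize(binary.operator.lexeme, binary.left, binary.right)
  end

  def visit_literal(literal)
    literal.value
  end

  private

  def parenthesize(name, *args)
    "(#{name} #{args.map { |arg| arg.respond_to?(:accept) ? arg.accept(self) : arg }.join(' ')})"
  end
end

# (1 + 2) * 3 rendered in Lisp-style prefix form:
ast = Binary.new(Binary.new(Literal.new(1), Token.new('+'), Literal.new(2)),
                 Token.new('*'), Literal.new(3))
puts MiniAstPrinter.new.print(ast)  # => (* (+ 1 2) 3)
```

The respond_to?(:accept) check in parenthesize is what lets raw values (like the '=' string in visit_var) sit alongside sub-expressions in the same argument list.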
> Come take a break playing this little game...

We get a telnet and a Game Boy ROM.

### Part 0 - checking out the game

It's a quite simple game where you can move and shoot enemies, which also try to shoot at you. Each enemy killed gains you your current `score` amount of `gold`. You can also open the shop and buy items for gold.

### Part 1 - reversing

We used [BGB](https://bgb.bircd.org) for dynamic analysis together with Ghidra for static analysis. I've attached our exported Ghidra DB. You need the [GhidraBoy](https://github.com/Gekkio/GhidraBoy) plugin to open it though. A placeholder flag string is inside the ROM, but is unused. We didn't think that the bug would be in the movement function, so we mostly looked into the shop part of the game while reversing. By setting item counts in the debugger to 0xFF we noticed that the entire menu is glitched: we can move the cursor outside of the 15-item range. While reversing this behaviour in depth with Ghidra, the code looked extremely suspicious, almost as if this bug was made on purpose. After some time we found how to trigger this bug. You can:

- buy an item
- open the use menu
- select that you want to throw 1 item
- use the item (which decrements the count from 1 to 0)
- throw the item (which decrements the count from 0 to 255)

thus you end up with a negative amount of an item.

### Part 2 - pwn

Long story short:

- we set up our score to be exactly 0xc3ca, which points into `item_use_ptrs`

```
c3bf 87 57 addr use_item_0
c3c1 02 58 addr use_item_1
c3c3 81 58 addr use_item_2
c3c5 00 59 addr use_item_3
c3c7 7f 59 addr use_item_4
c3c9 fe 59 addr use_item_5  <= we point at byte 59 here
c3cb 7d 5a addr use_item_6
c3cd fc 5a addr use_item_7
c3cf 7b 5b addr use_item_8
c3d1 fa 5b addr use_item_9
c3d3 79 5c addr use_item_10
c3d5 51 56 addr use_item_11
c3d7 ad 56 addr use_item_12
c3d9 f2 56 addr use_item_13
c3db 3a 57 addr use_item_14
c3dd ?? ?? int16_t ??
```
```
6d82 ds "INS(PLACEHOLDER!!!)"
```

- we prepare our gold for the next steps to end with exactly E9, which will be the `JMP (HL)` opcode
- we set up our shellcode at 0xc3ca by moving the cursor to our shellcode and tossing X amount of items; by doing that we can decrement any positive value (that is <= 0x7F), but we can't point our cursor at negative values (or 0)
- call our shellcode by using the item

So here's how our shellcode worked without garbage opcodes:

```
LD E, 0x82
LD D, 0x6d
LD HL, 0x5711  # after jump
```

But to make it work with the bytes we had at hand and the mechanic of only decrementing positive numbers, we created:

```
0x59 => toss 27 => 0x3e # LD A, imm
0x7d => toss 54 => 0x47 # load (lower byte of flag string pointer shifted right by 1 bit) into A
                        # we do this because the flag addr ends with 0x82 which is negative
0x5a => toss 44 => 0x2e # LD L, imm
0xfc =>         => 0xfc # this operation is just a workaround for not being able to touch 0xfc
0x5a => toss 83 => 0x07 # RLC A # shift A left (to fix the pointer to the flag)
0x7b => toss 28 => 0x5f # LD E, A # move lower byte of flag pointer into E
0x5b => toss 29 => 0x3e # LD A, 0xFA
0xfa =>         => 0xfa # this operation is just a workaround for not being able to touch 0xfa
0x5b => toss 69 => 0x16 # LD D, imm
0x79 => toss 12 => 0x6d # move higher byte of flag pointer into D
0x5c => toss 1  => 0x5b # LD E, E
0x51 => toss 2  => 0x4f # LD C, A
0x56 => toss 4  => 0x52 # LD D, D
0xad =>         => 0xad # XOR L
0x56 => toss 4  => 0x52 # LD D, C
0xf2 =>         => 0xf2 # LD A, (C)
0x56 => toss 53 => 0x21 # LD HL, 0x5711
0x31 => toss 41 => 0x11 # load address where push DE, call printf exists
0x57 =>         => 0x57 # into HL
gold =>         => 0xe9 # JMP (HL)
```

Since we're printing where our score was placed, we can only print 3 characters, as the rest is rendered outside of the visible screen. So we created our shellcode to not crash the game, and we have an easily movable string pointer at the beginning of the shellcode.
So we used 0x47 to print the 3 last characters, then decremented the value to 0x46 to move the pointer and print the previous characters, and so on till 0x41.

*Note: actually 0x41 prints 6 characters, since it overflows the text and starts overwriting on the left side as well.*

```
0x41: INS( !!!)
```
OPCFW_CODE
can't install swm on Ubuntu 15.04

I apologize if this has already been asked; I couldn't find any solutions online. I am trying to install swm from here, but I keep getting this error when running make in the terminal:

```
c99 swm.o -o swm -lxcb -L/opt/X11/lib -L/usr/X11R6/lib
swm.o: In function `cleanup':
swm.c:(.text+0x1b): undefined reference to `xcb_disconnect'
swm.o: In function `deploy':
swm.c:(.text+0x43): undefined reference to `xcb_connect'
swm.c:(.text+0x59): undefined reference to `xcb_connection_has_error'
swm.c:(.text+0x76): undefined reference to `xcb_get_setup'
swm.c:(.text+0x7e): undefined reference to `xcb_setup_roots_iterator'
swm.c:(.text+0xd2): undefined reference to `xcb_grab_button'
swm.c:(.text+0x114): undefined reference to `xcb_grab_button'
swm.c:(.text+0x145): undefined reference to `xcb_change_window_attributes_checked'
swm.c:(.text+0x154): undefined reference to `xcb_flush'
swm.o: In function `focus':
swm.c:(.text+0x1ba): undefined reference to `xcb_get_geometry'
swm.c:(.text+0x1d2): undefined reference to `xcb_get_geometry_reply'
swm.c:(.text+0x4d1): undefined reference to `xcb_generate_id'
swm.c:(.text+0x52f): undefined reference to `xcb_create_pixmap'
swm.c:(.text+0x53e): undefined reference to `xcb_generate_id'
swm.c:(.text+0x561): undefined reference to `xcb_create_gc'
swm.c:(.text+0x586): undefined reference to `xcb_change_gc'
swm.c:(.text+0x5a7): undefined reference to `xcb_poly_fill_rectangle'
swm.c:(.text+0x5dd): undefined reference to `xcb_change_gc'
swm.c:(.text+0x5fe): undefined reference to `xcb_poly_fill_rectangle'
swm.c:(.text+0x625): undefined reference to `xcb_change_window_attributes'
swm.c:(.text+0x639): undefined reference to `xcb_free_pixmap'
swm.c:(.text+0x64d): undefined reference to `xcb_free_gc'
swm.c:(.text+0x675): undefined reference to `xcb_set_input_focus'
swm.o: In function `subscribe':
swm.c:(.text+0x6fd): undefined reference to `xcb_change_window_attributes'
swm.c:(.text+0x722): undefined reference to `xcb_configure_window'
swm.o: In function `events_loop':
swm.c:(.text+0x765): undefined reference to `xcb_wait_for_event'
swm.c:(.text+0x807): undefined reference to `xcb_kill_client'
swm.c:(.text+0x858): undefined reference to `xcb_map_window'
swm.c:(.text+0x8c8): undefined reference to `xcb_configure_window'
swm.c:(.text+0x8dc): undefined reference to `xcb_get_geometry'
swm.c:(.text+0x8f4): undefined reference to `xcb_get_geometry_reply'
swm.c:(.text+0x957): undefined reference to `xcb_warp_pointer'
swm.c:(.text+0x9aa): undefined reference to `xcb_warp_pointer'
swm.c:(.text+0x9ee): undefined reference to `xcb_grab_pointer'
swm.c:(.text+0xa01): undefined reference to `xcb_flush'
swm.c:(.text+0xa20): undefined reference to `xcb_query_pointer'
swm.c:(.text+0xa38): undefined reference to `xcb_query_pointer_reply'
swm.c:(.text+0xa5c): undefined reference to `xcb_get_geometry'
swm.c:(.text+0xa74): undefined reference to `xcb_get_geometry_reply'
swm.c:(.text+0xbce): undefined reference to `xcb_configure_window'
swm.c:(.text+0xbdd): undefined reference to `xcb_flush'
swm.c:(.text+0xc02): undefined reference to `xcb_get_geometry'
swm.c:(.text+0xc1a): undefined reference to `xcb_get_geometry_reply'
swm.c:(.text+0xc72): undefined reference to `xcb_configure_window'
swm.c:(.text+0xc81): undefined reference to `xcb_flush'
swm.c:(.text+0xca8): undefined reference to `xcb_ungrab_pointer'
swm.c:(.text+0xcf8): undefined reference to `xcb_flush'
collect2: error: ld returned 1 exit status
Makefile:18: recipe for target 'swm' failed
make: *** [swm] Error 1
```

Any suggestions?

I got it! You need to change line 19 in the Makefile to:

```
@${LD} -o $@ ${OBJ} ${LDFLAGS}
```

Doesn't work; I have the same problem, and the package is installed. Ok, I will go back in my hole now :)

Try it now that I have added all the other headers.

I'm not JeffMontes ;) But sorry, no, same error.

Try now @A.B. Now it will get all the packages matching libxcb*-dev.

Great, it works @A.B. Glad I could help! I'm making a pull request right now!

@JeffMontes Could you please mark this as the answer if it helped? O.K.
The pull request was merged! (Yay!) @JeffMontes any feedback?
STACK_EXCHANGE
Find below the new features in QRadar version 7.2.5, which was released to the public on 6th of June 2015.

Domain segmentation
Domain segmentation is introduced in the current version, based on event and flow collectors, log sources, log source groups, flow sources, and custom properties. From now on you can grant access to domains using security profiles and make sure that domain restrictions apply across the entire QRadar system.

Lightweight Directory Access Protocol (LDAP) providers for authorization
QRadar reads the user and role information from the LDAP server, based on the authorization criteria defined. Moreover, you can configure QRadar to map entries from multiple LDAP repositories into a single virtual repository.

Centralized log file collection
Simultaneously collect log files from all managed hosts directly to QRadar. Log files contain detailed information about the deployment, such as host names, IP addresses, and email addresses.

Improved SSH key management
SSH keys are distributed during deployment. During the upgrade to QRadar V7.2.5, the installer replaces the SSH keys that are currently on the managed hosts. Removing or altering the keys might disrupt communication between the QRadar Console and the managed hosts, which can result in lost data.

Master Console
Monitor one or multiple QRadar deployments with Master Console. You can use Master Console to view system notifications, event and flow rates, CPU usage by process, memory usage, and more working data. This gives you all of your system notifications, and other health information about your QRadar hosts, in one place.

New management screens
New management screens for adding managed hosts to your QRadar deployment. This new menu partly replaces the same options from the Deployment editor, which is a Java-based client.

X-Force Exchange integration with QRadar
Use X-Force Exchange to collect and look up IP addresses and get more information on URLs that were identified by QRadar in events, rules, flows, and offenses.
You can send any IP address that is displayed in QRadar to X-Force Exchange. You can also use URLs from events on the Log Activity tab.

Report sharing
A new feature for sharing reports with groups of users. Now you can add a report to a report group shared with everyone, or to a group shared only with users who have specific user roles and security profiles. You can set a level of confidentiality for a report, with a notification that appears in the report header and footer. You can now also add page numbers and create reports based on saved asset searches.

More advanced search options
Use the TEXT SEARCH operator to do full text searches and find specific text in custom properties for events and flows. Use historical correlation when analyzing events loaded in bulk, testing new rules, and re-creating offenses that were lost or purged.

Ariel Query Language (AQL) lookup functions
In 7.2.5, new AQL X-Force lookup functions were added to query X-Force IP address and URL categorizations. The categorizations can be used in query result data or they can be used to filter events and flows.
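As a rough sketch of what a full-text query using the TEXT SEARCH operator might look like (the search term and column list are illustrative only; consult IBM's AQL reference for the precise syntax of your release):

```sql
-- Hypothetical example: full-text search across recent events
SELECT sourceip, destinationip, QIDNAME(qid)
FROM events
WHERE TEXT SEARCH 'failed login'
LAST 24 HOURS
```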
OPCFW_CODE
A couple of months ago I began working on a tactical RPG in my spare time. While I'm not ready to reveal details about the story or setting at the moment, I've finished a prototype of the tactical combat mode, and I'd like to talk about how that's come together. The prototype is fully playable, except that there's no artificial intelligence at the moment, so the player has to control both sides until I write some AI, which I'm not going to do until more of the design is locked down. I think I've come up with some cool twists on the tactical layer that will create compelling, varied gameplay challenges, especially once the story hooks start to get added in.

Let's start with some of the key pillars I'm aiming for:

- combat areas will be procedurally generated
- characters won't buy and equip weapons; instead, weapons will be scattered around the combat areas
- weapon/item locations will also be randomly generated, so you can play the same "level" many times but face a different tactical scenario every time
- decisions should always pose trade-offs; there should be no optimal strategy that will always win
- levels will be procedural, but all player actions will be deterministic - no dice rolls
- combat will be melee; there are no guns, crossbows, etc.

What does that all mean in practice? Let's take a look at a screenshot of what the prototype looks like at the moment and I'll explain the details. Keep in mind that all the art and user interface at the moment are just quickly thrown together placeholders. The goal for the prototype is to make the game fun and answer basic questions like "How many tiles should the grid have?" and "How much damage should attacks do?" So here's a sample layout. So what's going on here? The green circles are "AI" characters and blue circles are the player's party.
The current character is highlighted in orange, and they've pulled up an attack that's showing its range: red are tiles that are within range but have no valid target, green are enemies that could be attacked. The white boxes represent items/weapons that can be picked up (there were more when I generated the level, but characters have equipped most of them in this screenshot). The rest of the tiles represent a level layout, with the lightest tiles being the floor, the darkest tiles representing "tall" impassable terrain, and the mid-coloured tiles representing objects that are not floor tiles but are shorter than characters (i.e. objects characters can see over). If you're familiar with games like XCOM you can think of it as floor, medium cover, and tall cover, although since this is intended to be a melee game there's no cover system.

It might not look like much, but this area is intended to represent a pub. The bottom left and right areas are booths, in the middle of the room are tables, in the top left is a bar and bar counter, and the top right is a pool table and juke box.

Here's an example of how the procedural generation will work in practice: each combat area will be drawn from a template, which in this example is "a pub". There will be a whole bunch of possible pieces that make up a pub, like tables, chairs, entertainment, and so forth. When the level is created, it will choose from a selection of these pieces that go together, creating a layout that's unique to that particular combat encounter. It can be encoded so certain pieces only show up in certain places; for example, maybe a bar counter can only ever appear in the top left corner in order to ensure you don't wind up with a nonsensical layout that's just bar counters all over the place. The goal of the system is to create constantly varying environments and challenges so that players go into each combat scenario facing something new.
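The template-and-pieces idea can be sketched in a few lines. Everything here is invented for illustration (the slot names, piece lists, and function names are not from the actual prototype):

```python
import random

# Hypothetical template for "a pub": each slot lists the pieces allowed
# there, so e.g. a bar counter can only ever appear in the top-left corner.
PUB_TEMPLATE = {
    "top_left": ["bar_counter", "kitchen_door"],
    "top_right": ["pool_table", "juke_box"],
    "middle": ["round_tables", "long_tables"],
    "bottom_left": ["booths"],
    "bottom_right": ["booths", "dart_board"],
}

def generate_layout(template, rng):
    """Pick one allowed piece per slot, yielding a unique layout per encounter."""
    return {slot: rng.choice(pieces) for slot, pieces in template.items()}

layout = generate_layout(PUB_TEMPLATE, random.Random())
print(layout["bottom_left"])  # always "booths": the only piece allowed there
```

The per-slot constraint is what keeps the randomness from producing the "bar counters all over the place" layouts described above.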
Once I start to bring the narrative components online those will be tied in too, so info can be fed into the level generator to line up with story beats. This should also enable me to create a game that still feels fresh to play after several hours without needing a whole team of people creating a large array of unique art assets. So that's more or less the state of the prototype at the moment. All of the core systems are in place, like movement, attacking, items, level layouts, turn order management, and so forth. My main task right now is to add in all the abilities and items so I can start doing more thorough testing to see what works and what doesn't. Once the combat is closer to being done I'll start moving on to other parts of the game, like the hub area that the player will return to between quests, and building systems for narrative and dialogue.
OPCFW_CODE
Android Pentesting 101 — Part 3

by Vaibhav Lakhani and Dhir Parmar

Welcome to Part 3 of Android Pentesting. This series is about how you can hack into Android and find vulnerabilities in it using various methods. If you haven’t read parts 1 and 2, I would highly recommend reading them before jumping to this section. In this third and final part, we aim to cover the Dynamic Analysis side of Android Pentesting, along with various tools. Are you ready??

So Dynamic Analysis in Android Pentesting is all about playing with the requests and responses that the application is sending to the web server. But here is the catch! There are two things to do before capturing the request:

- Root Detection Bypass
- SSL Pinning Bypass

Let’s talk about them individually.

Root Detection Bypass: Rooting is the process of allowing users of the Android mobile operating system to attain privileged control (known as root access) over various Android subsystems. Security-sensitive applications are often designed not to run on rooted devices, and hence we try out various Root Detection Bypasses using Frida and Objection.

SSL Pinning Bypass: SSL Pinning is a technique that most application owners implement so that requests sent by the mobile application cannot be intercepted. SSL Pinning is considered the first and most important step in the security mechanism of an application. But due to improper implementation, SSL Pinning can usually be bypassed. Again, Frida and Objection are the best tools that will help us here.

So how do you start with the Dynamic Analysis? Well, the first step is the need for a Rooted Android Phone. You can easily root any Android Phone using Magisk (the best and most accurate way) or other methods found here. Alternatively, you can use Genymotion, which sometimes does not provide accurate results.

Alright, so let’s start by taking a live target; let's call the application <Redacted>. Okay, so <Redacted> has Root Detection implemented, which can be seen in the screenshot below.
Now bypassing this is very simple:

- Download and install Frida & Objection using Hail Frida.
- Try connecting Burp Suite and your phone by following this article.
- Start the Frida server using: adb shell /data/local/tmp/frida-server &
- Open the application and keep it running in the background.
- Find the application package name using: frida-ps -Ua
- Now you need to find the right script. This time we are going to run the Root Bypass Script, which can be found here. If this Frida script does not work, you can find more online. Other Frida scripts can be found here.
- The final step is to hook the script using Frida with the command: frida -U -f com.package.android -l D:\frida\fridascript-root.js --no-pause

Alternatively, you can use Objection to bypass the Root Detection. That can be quickly done using the following commands:

- objection -g “com.package.android” explore
- android root disable
- android root simulate

Perfect! Now that we have bypassed the Root Detection, we can move forward. There is still a step left before we can capture the requests, which is to bypass SSL Pinning.

Alright, so to bypass the SSL Pinning we will again use Frida and the SSL Pinning Bypass Script, which can be found here. Uh-Oh! But here is the catch: you cannot run 2 Frida scripts simultaneously. But no problem! Just add the SSL Pinning Bypass Script below the Root Detection Bypass Script, re-run Frida, and see the magic. Dang! This is how we can capture all the requests!

Alternatively, you can use Objection to bypass the SSL Pinning. That can be quickly done using the following commands:

- objection -g “com.package.android” explore
- android sslpinning disable
- android sslpinning simulate

Oh, wait! The application uses the fingerprint as well to authenticate the user. Let’s try to bypass this as well. The script for fingerprint bypass can be found here. So simply add the fingerprint bypass script below the Root Detection and SSL Pinning scripts. Great!
So we were able to bypass the fingerprint as well! So what next? Nothing but capturing all the requests and playing with them as you do while testing a Web Application. We have collected and added all the Frida scripts which we find useful over here. Please feel free to use them. Besides, we also created a Mind Map which you can use, and it can be found below.

That’s all folks for this article! We hope that you enjoyed the entire series! We will be back with another such series, and this time it will be iOS Pentesting, where we will show you the methodology of Dynamic Analysis while performing iOS Pentesting.
OPCFW_CODE
Using Python functions in Tkinter.Tcl()

I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network). I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that the user can come up with:

```tcl
post-configure {
    if {[variant_isset universal]} {
        set conflags ""
        foreach arch ${configure.universal_archs} {
            if {${arch} == "i386"} {append conflags "x86 "} else {
                if {${arch} == "ppc64"} {append conflags "ppc_64 "} else {
                    append conflags ${arch} " "
                }
            }
        }
        set profiles [exec find ${worksrcpath} -name "*.pro"]
        foreach profile ${profiles} {
            reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile}
            # Cures an isolated case
            system "cd ${worksrcpath}/designer && \
                ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \
                -o Makefile python.pro"
        }
    }
}
```

Here, variant_isset, reinplace and so on (everything other than the Tcl builtins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above mentioned Python "functions"). Is this possible to do in Python? If so, how?

```python
from Tkinter import *
root = Tk()
root.tk.eval('puts [array get tcl_platform]')
```

is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on the Mac).

Not all builds of Tk start X11 on OSX. Use a Carbon (or Cocoa) targeted build of that library instead...

@Donal, well the point is - it brings up some GUI window. If not X11, I get a jumping Python rocket icon in the Dock followed by a window titled 'tk'. This is not desired for an app that has got nothing to do with GUI.

I suspected that might happen anyway. Didn't claim it was an answer after all.
You can use the tcl interpreter without loading Tk like this:

```python
import Tkinter
tcl = Tkinter.Tcl()
result = tcl.eval('''
puts hello, world
''')
```

With a little experimentation I discovered you can do something like this to create a tcl interpreter, register a python command, and call it from Tcl:

```python
import Tkinter

# create the tcl interpreter
tcl = Tkinter.Tcl()

# define a python function
def pycommand(*args):
    print "pycommand args:", ", ".join(args)

# register it as a tcl command:
tcl_command_name = "pycommand"
python_function = pycommand
cmd = tcl.createcommand(tcl_command_name, python_function)

# call it, and print the results:
result = tcl.eval("pycommand one two three")
print "tcl result:", result
```

When I run the above code I get:

```
$ python2.5 /tmp/example.py
pycommand args: one, two, three
tcl result: None
```

Great! Also - both the arguments and the return value of pycommand happen to be strings, which is obvious as Tcl represents values as strings. If you return a list ([1, "foo"]), for instance, from pycommand, that would be converted to a string when tcl.eval returns. The "None" that is being printed in your example is actually of string type (as shown by type(result)).

I just realized that code blocks ({...}) that are passed as mere strings to Python functions can simply be evaled again. tcl.setvar can be used for populating the variables to be used by the Tcl code... though I don't know how dotted values (${configure.universal_archs}) can be passed on.

Easily, the dot is not really special in Tcl variable names (it is only special to $ evaluation but not to [set]).

@Brian - I had to experiment in order to get the right result:

```python
from Tkinter import Tcl
tcl = Tcl()
result = tcl.eval(' puts "hello, world" ')
```

Note the placement of the single and double quotes.
This gave me the expected output: hello, world

Any other combinations of single or double quotes resulted in the following traceback:

```
File "<stdin>", line 1, in <module>
_tkinter.TclError: can not find channel named "hello,"
```

--- fracjackmac

What point are you trying to make? That if you give invalid Tcl code you get an error, or are you asking why you get that error?
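Pulling the thread's pieces together, the same round-trip in modern Python 3 might look like this (a minimal sketch; the function name py_add is invented, and tkinter.Tcl() still creates a Tcl-only interpreter with no Tk window):

```python
import tkinter

tcl = tkinter.Tcl()  # Tcl interpreter only; no Tk window is created

def py_add(*args):
    # Arguments arrive from Tcl as strings; the return value is
    # converted back to a Tcl string by the bridge.
    return sum(int(a) for a in args)

tcl.createcommand("py_add", py_add)
print(tcl.eval("py_add 1 2 3"))  # "6" - Tcl represents everything as strings
```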
STACK_EXCHANGE
In layman's terms, it is an inner function defined inside a function. There are a few reasons why sometimes we need to use nested functions. You use inner functions to protect them from everything happening outside of the function, meaning that they are hidden from the global scope.

```python
def outer(num1):
    def inner_increment(num1):  # Hidden from outer code
        return num1 + 1
    num2 = inner_increment(num1)
    print(num1, num2)

inner_increment(10)
# outer(10)
```

When we try to execute the above code, it throws an error: name 'inner_increment' is not defined. Now, try again by commenting out the line of code inner_increment(10) and uncommenting the line of code outer(10), then execute the code. It returns a result with two values because the print statement has 2 values. We could not access the inner (nested) function when we tried to call inner_increment(), because it is hidden from the global scope. By calling the outer function, outer(), and passing in an argument, the inner function is executed from inside outer().

Another reason to nest functions is to return them. Consider this classic example, raise_val(), which returns its inner function:

```python
def raise_val(n):
    """Return the inner function."""
    def inner(x):
        """Raise x to the power of n."""
        return x ** n
    return inner
```

When we execute this code by calling raise_val(), we do not need to repeatedly write the code twice.

```python
# function call
square = raise_val(2)
cube = raise_val(3)
print(square(2), cube(4))

# output
# 4 64
```

I have a question before proceeding: how do these lines of code work? While the n value (argument) for the function raise_val() is 2 and 3 respectively, the variables square and cube are passed an argument too. (This works because raise_val returns a closure: square remembers n = 2, so square(2) computes 2 ** 2 = 4, and cube remembers n = 3, so cube(4) computes 4 ** 3 = 64.)

–Keeping it DRY

Maybe you have a function that performs the same chunk of code in numerous places. DRY means “don’t repeat yourself”. In an example I found online, you might write a function that processes a file, and you want to accept either an open file object or a file name.
The code looks like:

```python
def process(file_name):
    def do_stuff(file_process):
        for line in file_process:
            print(line)
    if isinstance(file_name, str):
        with open(file_name, 'r') as f:
            do_stuff(f)
    else:
        do_stuff(file_name)
```

```python
# Define three_shouts
def three_shouts(word1, word2, word3):
    """Returns a tuple of strings concatenated with '!!!'."""
    # Define inner
    def inner(word):
        """Returns a string concatenated with '!!!'."""
        return word + '!!!'
    # Return a tuple of strings
    return (inner(word1), inner(word2), inner(word3))

# Call three_shouts() and print
print(three_shouts('a', 'b', 'c'))

# Output returns a tuple of 3 elements
# ('a!!!', 'b!!!', 'c!!!')
```

Remember that assigning names will only create or change local names, unless they are declared in global or nonlocal statements using the keywords global or nonlocal.

```python
def outer():
    n = 1
    def inner():
        nonlocal n
        n = 2
        print(n)
    inner()
    print(n)
```

The above code alters the value of n in the enclosing scope. When the outer() function is called, n = 1 has its value changed by the inner() function using the keyword nonlocal. Therefore, both print statements print 2.
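The global keyword works the same way one scope further out. A minimal counterpart to the nonlocal example above (the function name change_global is ours, for illustration):

```python
n = 1

def change_global():
    global n  # rebind the module-level n instead of creating a local one
    n = 2

change_global()
print(n)  # 2
```

Without the global statement, the assignment inside change_global() would create a new local n and leave the module-level value at 1.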
OPCFW_CODE
Connecting from DbVisualizer to Dremio Cloud

You can use DbVisualizer to query and visualize data by means of Dremio Cloud. You can use any version of DbVisualizer, as long as you use Dremio JDBC Driver 14.0.0 or later.

Supported Authentication Methods

There are two methods of authenticating that you can choose from when you connect from DbVisualizer to Dremio Cloud:

- Use Microsoft Azure Active Directory as an enterprise identity provider. To configure Microsoft Azure Active Directory, see Microsoft Azure Active Directory. You can use Microsoft authentication only if the admin for your Dremio Cloud organization has enabled it.
- Use a personal access token (PAT) obtained from Dremio Cloud. To create a PAT, follow the steps in the section Creating a Token.

Before you begin:

- Download the Dremio JDBC driver.
- If you do not want to connect to the default project in your Dremio organization, copy the ID of the Dremio Cloud project that you want to connect to. See Obtaining the ID of a Project for the steps. After you obtain it, save it somewhere that you can retrieve it from during the procedure.

Adding Dremio’s JDBC Driver to DbVisualizer’s Driver Manager

- Launch DbVisualizer.
- Select Tools > Driver Manager.
- In the Driver Name list of the Driver Manager dialog, select Dremio.
- Click the folder icon to find and select the downloaded Dremio JDBC driver.
- Close the Driver Manager dialog.

Creating a Connection

Select Database > Create Database Connection. In the Use Connection Wizard? dialog, click No Wizard. Name the connection.
Ensure that these default values are set:

- Settings Format: Server Info
- Database Type: Auto Detect (Dremio)
- Driver: Dremio
- Connection Type: Direct

In the Database Server field, specify

In the Database Port field, specify

In the Database Userid and Database Password fields, specify your authentication credentials:

- If you want to authenticate by using a Microsoft account and password, and Microsoft Azure Active Directory is configured as an enterprise identity provider for Dremio Cloud, specify the username and password for the account.
- If you want to authenticate by using a personal access token, specify these values:
  - In the Username field, type
  - In the Password field, paste your personal access token.

Click the plus sign to add a new parameter. Name the parameter, and specify true for the value of this parameter.

If you do not want to connect to the default project in your organization, follow these steps:

a. Click the plus sign to add a new parameter.
b. Name the parameter.
c. In the Value field, paste the ID of the project that you want to connect to.

If the connection works, DbVisualizer displays a message as shown below (the reported version numbers might differ):

```
Dremio Server 20.0.0-202112201840340507-df2e9b7c
Dremio JDBC Driver 19.1.0-202111160130570172-0ee00450
```

You can now expand your Dremio connection to see a list of the spaces and data sources that are in the project.
OPCFW_CODE
Why does my solution to this nodal analysis problem not work?

Observe the following schematic, in which I am trying to calculate the value of \$V_1\$: Let \$i_1\$ be the current in the left-hand loop. Then \$i_1 = v_1 \div 50\$ and also \$i_1 = (600 - v_1) \div 10\$. Setting these two equations equal yields \$v_1 = 500\$. What did I do wrong? The given answer is 583.3 V for \$v_1\$.

You should express the voltage \$v_2\$, as it matters for determining \$i_1\$. Voltage \$v_2\$ depends on the 15 A source and the current drawn by the 30-ohm resistor.

I get their answer really easily. Just convert the 15 A and 30 Ohm (Norton) to its 450 V and 30 Ohm Thevenin equivalent. This means 1050 V total supply across a 40 Ohm + 50 Ohm divider pair. Or 1050 * 50 / (50 + 40), which gives their value quickly. So this sanity check says they are right. Another way is to just look at the current in the voltage loop, lose the current source for a moment, and find (600/90) as the current. Then add 1/3rd of 15 A (because that is how it divides over the two branches). That times 50 ohms gives it: ((600/90)+5)*50. Same answer.

@VerbalKint but why isn't the voltage drop across the 10-ohm resistor just 600 - v1? And I should be able to determine i1 from that. That is what I don't understand: why does v2 matter for determining i1?

The voltage of the 100 ohm resistor is 600 + v2 - v1

@Krauss I assume you mean the 10-ohm resistor. Why is that? My book says if we have a voltage source between a nonreference node and the reference node, we just take the nonreference node to be the value of the voltage source. Does this not apply if there are other elements like resistors in series with the source? Why/why not?

The problem is asking for nodal analysis and you are doing loop analysis. Sum the currents entering node v2.

also i1=(600−v1)÷10 nope. Check where the reference/ground is, and where the voltage source is referenced to.
@xormapmap The 600 V power source is not connected to reference.

@RohatKılıç and Krauss (multiple tags don't work, sorry) but these two comments did more to further my understanding of this in the space of 5 seconds than I have gotten after hours of my own frustration. Thank you. I feel stupid now.

Your mistake is that you assumed that the right side of the 600 V voltage source (v2) is at 0 V, which is false. The correct equation is i2 = (v2 + 600 V - v1) / 10 ohm.
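The Thevenin sanity check from the thread is easy to reproduce numerically. A sketch of the arithmetic only, using the component values given above:

```python
# Norton (15 A, 30 ohm) -> Thevenin: 15 * 30 = 450 V in series with 30 ohm.
# Total loop EMF: 600 + 450 = 1050 V across the 10 + 30 = 40 ohm branch
# in series with the 50 ohm resistor; v1 is the drop across the 50 ohm.
v_thevenin = 15 * 30            # 450 V
v_total = 600 + v_thevenin      # 1050 V
v1 = v_total * 50 / (50 + 40)   # voltage divider
print(round(v1, 1))  # 583.3

# Second route from the thread: superpose the sources directly.
v1_alt = ((600 / 90) + 5) * 50
print(round(v1_alt, 1))  # 583.3
```

Both routes land on the book's 583.3 V, confirming the OP's equation for \$i_1\$ (not the book) was at fault.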
STACK_EXCHANGE
I have not seen the Rootsweb site builder. Can you copy and paste the page where you find this into a reply so I can see where this happens? I use www.rootsweb.ancestry.com and have shared my family tree GEDCOM. I also use World Connect when looking for ancestors' records. Is the Rootsweb site builder something new? Hope you can share more, unless someone else already knows what you are talking about and will respond.

...but a friend wants to use the free site and was given space at Rootsweb.ancestry.com to place their website. But you have to access it from the freepages site and use their method. He has asked me to be the webmistress, and I have seen the wonderful pages others have created there. I am just having a really hard time using their site.

Are you sure you have to use their site builder? I think it's just there for those who need/want to use it. You should be able to create the site with whatever software you like and then FTP it to Rootsweb. At least, that's how the homepage area works, and I thought the free page area was similar. But admittedly, I haven't puttered in a couple of years.

I don't know... you may be right. I don't have a website builder; I use the one integrated with the hosting company, Web.com. My daughter uses Yahoo, but you download their builder to your computer and then upload the site. She likes it, but the downside is she can only use it from her computer, whereas I can use any computer since the builder is on the web. Can you suggest a good website builder software? I tried one, but the antivirus wouldn't allow it, so I deleted it.

I use Arachnophilia -- it's careware, a form of freeware in which the author basically says: hey, I'm doing something nice for you (the software), you be nice to other folk in return - a concept that I appreciate.
I haven't used the most recent versions 'cause I got sick a couple of years ago and I'm just getting back into the swing of things, but I used it regularly between 1998 and 2004, and it was easy to learn. One of my goals for this winter is to try to go back and update pages I haven't touched in five years -- it should be interesting.

Sherry, I have a site on freepages at Rootsweb and I don't use a site builder. I just created my site as I would normally (by hand and using an HTML editor called ACE HTML), then uploaded it via WS_FTP. You can use any FTP program of course.

I googled this and found it on CNET. The reviews said it crashed a lot and has a must-install toolbar which is a trojan. What sort of luck have you had with it? I guess good, since you are still using it. I suppose the better question would be: how long have you been using it?

I just obtained a copy of Microsoft's Expressions. I hear it is good but hard to master. Anyone familiar with that program?
OPCFW_CODE
Augmenting Well Production Analysis with Subsurface Data The Montney Formation, located in the Western Canadian Sedimentary Basin, is developed in a multi-zone stack throughout the fairway. Unfortunately, these refined target zones are not captured in public data. For analysis, it is important to differentiate Montney Wells into multiple target zones because they vary significantly in reservoir properties both vertically and laterally. Identifying the target zone based on sequence stratigraphy is a valuable process but can be time-consuming. In this blog we show a quick method to differentiate target zones using a depth-based approach that is helpful when you have limited time and resources. This workflow can be applied to any map-based data to derive a data set suitable for well production analysis. For contour-based geologic data to be useful for well production analysis, we need a map-derived value for each individual well. To accomplish this, we started with a publicly available Geologic map of the Montney Top in Meters Subsea (BC OGC, 2012). Figure 1: Digitized and interpolated Montney Top Subsea TVD (True Vertical Depth) Structure Map, (OGC, 2012). The Montney Top Structure map contours were digitized (Figure 1), so that interpolated well-values could be derived. Using point-sampling, the Montney Top Depth was extracted at the intersection of the Bottom Hole locations. The benefit of this method is that you can derive values for any well within the map area and the point-sampling process can be re-run as new wells are drilled within the map area. The resulting dataset consists of a Top Depth for every Well Bottom Hole Location and can then be used to calculate other attributes. In this example workflow, the Formation Top Depth and Well Bottom Hole Depth were used to calculate Depth Below Formation Top at Bottom Hole locations for a subset of Montney Horizontal wells in British Columbia. 
The Formation Top Depth and Depth Below Formation Top were imported into VERDAZO as User Defined Attributes (UDA) and used to group wells into bins based on the Montney zone they were targeting (Figures 2 & 3). Figure 2: Schematic diagrams showing the map view, dip section, and strike section of a pad targeting the Montney Formation. To illustrate the targeted zones, Figure 2 shows the Depth Below Formation Top well groups. The target zone definitions will vary depending on geographic location within the Montney Fairway; in this specific area, we defined three target zones based on Depth Below Formation Top. Grouping the wells based on Depth Below Formation Top allows us to quickly differentiate target zones and compare their production from multiple perspectives (Figure 3). The Type Well Curve Chart shows that the average gas peak rate is highest in the shallowest target zone and decreases with depth. A cumulative probability (probit) plot of First 12-Months Cumulative Gas Production per 100m Completed Length likewise shows gas production decreasing with depth (probit mean: 250 down to 112 mcf/day/100m). Additionally, the P10:P90 ratios indicate the target zone wells have similar statistical variability (P10/P90: 3.7 to 3.9). Figure 3: Production comparison using depth-based Target Zones (IHS Datahub - VERDAZO, March 2019). Without differentiating the target zones, we would not have been able to compare and contrast the production changes seen throughout the multi-zone stack. Bringing in a single piece of subsurface data, a Montney Top Structure map, substantially improved our production analysis and insights. This workflow can be used to incorporate any map-based geoscience data. Instead of overlaying production bubbles on maps to visually look for trends, extract well information from the map to enable a more detailed analysis. Contact email@example.com if you would like to learn more about integrating subsurface data into your analyses.
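The binning step is simple arithmetic once each well has a map-derived top depth. Here is a minimal sketch: the zone names, the 40 m and 90 m boundaries, and the well values are all hypothetical stand-ins, since the actual zone definitions vary across the fairway as the text notes.

```python
# Sketch of the binning step: Depth Below Formation Top = bottom-hole TVD
# minus the formation-top TVD sampled from the map, then bin into depth-based
# target zones. Zone boundaries and well values are hypothetical.

def depth_below_top(bh_tvd_m, top_tvd_m):
    """Depth of the well's bottom hole below the formation top, in metres."""
    return bh_tvd_m - top_tvd_m

def target_zone(depth_m, boundaries=(40.0, 90.0)):
    """Assign one of three depth-based target zones (Upper/Middle/Lower)."""
    if depth_m < boundaries[0]:
        return "Upper"
    if depth_m < boundaries[1]:
        return "Middle"
    return "Lower"

wells = {
    "A1": {"bh_tvd_m": 1870.0, "top_tvd_m": 1840.0},   # 30 m below top
    "B2": {"bh_tvd_m": 1990.0, "top_tvd_m": 1930.0},   # 60 m below top
    "C3": {"bh_tvd_m": 2060.0, "top_tvd_m": 1950.0},   # 110 m below top
}

zones = {w: target_zone(depth_below_top(v["bh_tvd_m"], v["top_tvd_m"]))
         for w, v in wells.items()}
```

The resulting zone label per well is exactly the kind of User Defined Attribute that can then drive the grouped type-curve and probit comparisons described above.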
Public Map: BC Oil & Gas Commission Open Data Portal, Mark Hayes, 2012, Structure Top Montney, ftp://ftp.bcogc.ca/outgoing/OGC_Data/Geology_and_Engineering/montney_play_atlas_maps/.
Production data: IHS Information Hub
Completion data: geoLogic WCFD
Production Analysis: VERDAZO
GIS Tool: QGIS
Alternate Access Mapping When you initially install and start using SharePoint, accessing it by using the NetBIOS name of the server works fine, but what if you want your users to be able to access it the same way they do other Web sites, or to be able to access it from the Internet? You can't resolve that server name among all the other machine names on the Internet, so you need it to resolve to a DNS name. Alternate Access Mapping (AAM) is about mapping a SharePoint web application to an alternate address other than the default. This way, you can have an internal, default name of http://spf2 and an Internet URL of http://SharePoint.dem0tek.com, both pointing to the same server (and, more importantly, to the same content). AAM specifies alternate access to a web application by internal URLs, public URLs, and zones. An internal URL is what the web application responds to. A public URL is what the web application returns to the user in the address bar and in the links for all search results. A web application can have up to five public URLs associated with it (these are called zones): a Default zone (the default URL for the web application, which is the root path for all the site collections it might contain), an Intranet zone, an Internet zone, an Extranet zone, and a Custom zone. There is also another use for AAM - extending web applications. An extended web application is just an IIS Web Site that points to the same content database as an existing web application. This is done if you want to use some other URL, security settings, or authentication type to access the same data (essentially, if you want to use different IIS Web Site settings, such as anonymous access or Kerberos, to access the same content). That way, users can have more than one way to access the same data, especially if you want to have different types of authentication for the content, depending on which URL the user uses.
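The internal-versus-public URL distinction can be modeled in a few lines. This is an illustrative sketch only, not a SharePoint API: each zone pairs the internal URLs the web application responds to with the single public URL it writes into links and search results. The host names reuse the examples from the text.

```python
# Illustrative model of Alternate Access Mapping (not a real SharePoint API):
# each zone maps the internal URLs a web application answers on to the one
# public URL returned to users. Host names follow the examples in the text.

AAM = {
    "Default":  {"internal": ["http://spf2"],
                 "public": "http://spf2"},
    "Internet": {"internal": ["http://sharepoint.dem0tek.com"],
                 "public": "http://sharepoint.dem0tek.com"},
}

def public_url_for(request_url):
    """Return the public URL of the zone whose internal URL matches the request."""
    for zone, urls in AAM.items():
        for internal in urls["internal"]:
            if request_url.lower().startswith(internal):
                return urls["public"]
    return None  # no zone matches this request
```

For example, a request to http://SPF2/sites/hr resolves through the Default zone, while a request arriving on the dem0tek.com name resolves through the Internet zone, even though both hit the same content.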
Because an extended web application is just sharing the same content database as an existing web application, it is considered just another URL used to access the first web application's content. This is why an extended web application is not given its own name in the web application list but is considered a zone of the existing web application. In that case, one of the public URL zones is taken up by the URL of the extended web application. Note that there are a limited number of AAM zones available for extending (Intranet, Internet, Custom, Extranet) per web application. The Default zone is the original web application's URL, so obviously it is not available to be used for extending. So, when planning your URL structure and how users are going to access SharePoint, keep AAM in mind. When planning for SharePoint, it's also a good idea to keep in mind how you would like to structure your site collections. Site collections are composed of a top-level site and all the sites that stem from it (called subsites). The top-level site is usually accessed by using the web application's URL and then the path to the top-level site's home page. When creating a site collection, you must decide what its URL path will be. When you create your first site collection in a web application, you can give it the root address for that web application, or you can specify a path. This means that if you create the first web application on server SPF2, its URL can be http://SPF2, using port 80, which is the root address for the URL. But if you create a second site collection in that web application, it needs to have a different path, because it can't use the same URL. This is where managed paths come in. By default SharePoint has a sites wildcard managed path for
M: The rise of Python in computational science - TriinT http://www.walkingrandomly.com/?p=2062
R: danbmil99 > Can we stop promoting imperative-ish languages and focus on where lambda can > take us? Nope, and why would we? For many applications, including the majority of scientific research, FP is a non-starter. You need a simple, robust, procedural language that thinks the way most of our brains are wired. There is no reason or payoff in wrapping your mind around an arcane style of programming when the software project is not going to need the benefits of that style of development.
R: xtho How can you say FP is a non-starter? Just because some aging environments were designed by Fortran hackers? FP is better suited for parallelization. And where would you put R?
R: cdavid (disclaimer: I am a NumPy/SciPy dev) I think the language is just part of it. I would go as far as saying that from a purely "technical" POV, Lisp and ML (e.g. OCaml) are superior languages to Python in almost every way. Nevertheless, I think those are, at least today, not as good choices as Python for scientific research, for at least two reasons. First, I strongly believe scientific work in general needs to be more open, both publication-wise and implementation-wise. I see programming languages as a communication tool as much as an implementation tool for science, and I think Python fills this role very well, because it is very readable to the casual programmer. Lisp is too foreign for this audience. P. Norvig mentioned Lisp's relative failure with non-CS scientists at his talk at SciPy09 (<http://www.archive.org/details/scipy09_day1_03-Peter_Norvig>) - and I think you can count him as a Lisp fan. To quote his talk, at some point, "you have to stop fighting reality". Secondly, there is something about the Lisp, Haskell and OCaml communities which does not compare well with Python's. Those are very CS-savvy, really oriented toward programming - which is fine.
Different goals, different tools and all that. Also, do not forget that in science, projects generally last much longer than in most other areas. That's one of the reasons why Fortran is still so pervasive, after all. Code is also maintained by different people (grad students, etc.), involving several generations in some cases. Having a relatively mainstream language is a requirement - Python is already too weird in many cases... Concerning parallelization: even if your assertion were true, that's a concern only for a tiny proportion of what people do. Speed really does not matter most of the time (but it is true that when it does, it often matters in a significant way - high energy physics, climate modelling, etc.). I would be cautious about FP being better for parallelization for scientific work, though: a lot of tasks can be solved using MPI, etc. - and correct me if I am wrong, but I don't believe FP brings a lot of advantages there. Recently, a perspective from W. Stein, who started the SAGE project, was mentioned on LtU, and the article provides more insights, coming from a quite different background: <http://lambda-the-ultimate.org/node/3712>
R: graphene It's interesting that in Norvig's talk you linked, while praising Python, he also mentions that he is a supporter of the functional programming style for scientific work, both because it more transparently mirrors the mathematical formulation of the algorithm, and because it should make the use of massively parallel computations easier. I concede that there are currently few implementations of massively parallel, functional code (MapReduce and possibly Data Parallel Haskell come to mind), and it's true that the MPI-based Single Program, Multiple Data paradigm is dominant for now, the functional paradigm being but a very promising newcomer.
Given that, and the fact that languages such as Lisp, Haskell and OCaml are more aimed at the functional paradigm, wouldn't you agree that there is possibly a general shift going on in scientific programming, from the Fortran/C(++) imperative style, via Python, towards finally any of Haskell, Lisp and/or OCaml? I don't really agree with your implication that the (at first) obscure way of doing things functionally makes the work less open, and that this makes it less desirable. If everyone worked in exactly the same way, communication would be easier, but there would never be any disruptive change. I think letting everyone figure out for themselves what language works best for his/her application has more merits than trying to enforce a standard, be it Python + NumPy & SciPy, or anything else. If there turns out to be one clear winner, people will gravitate to that automatically, but you shouldn't argue for harmonization for its own sake.
R: chancho > and because it should make the use of massively parallel computations > easier. I concede that there are currently few implementations of massively > parallel, functional code (MapReduce and possibly Data Parallel Haskell come > to mind) Not only are there few, their numbers are growing much more slowly than traditional C/Fortran/MPI environments. The "massive" in massively parallel is getting more and more massive every year. Running computations at full speed at these scales, 100K+ cores in multiple levels of hierarchy (same die, same board, same blade, same switch, etc.), is a hugely mundane architectural problem. Not only is there no architecture besides MPI which has nearly as much effort invested in this problem, none are currently making the investment, so none have a chance to catch up within the next decade.
The only way you will see FP doing massive parallelism at speeds even close to the current best is if they wrap the C MPI interface, and I have yet to see any functional message-passing APIs which aren't tarted-up imperative environments. (If anyone knows any, please share.) At its core, message-passing programming involves managing data, deciding where it resides and where it needs to go, which is kind of antithetical to FP. You mention MapReduce, which I think is more of a data-center thing, subtly but fundamentally different from HPC, involving less computation and more I/O. FP has a good shot there (cf. Erlang) but not much chance at cracking the HPC market.
R: graphene _The only way you will see FP doing massive parallelism at speeds even close to current best is if they wrap the C MPI interface, and I have yet to see any functional message-passing APIs which aren't tarted-up imperative environments_ True, I would not be surprised if it evolved that way, but according to my (modest) knowledge, it is possible for imperative routines to have a purely functional interface. The extreme case, of course, is the fact that any functional high-level code needs to be translated to assembly code to run at all, but I don't see why that boundary can't be at a higher level. As long as the developer programs functionally (and derives the benefits from that), does it matter that his code is being translated first into imperative MPI-ified C, and then assembly? This is all provided, of course, that you can actually leverage the parallelism-related benefits of FP this way, which I guess is not known as of yet. You mentioned "functional message-passing APIs which [are] tarted-up imperative environments"; care to give an example? I'm intrigued... As for MapReduce, of course it's different from what the average HPC machine is used for, but being an embarrassingly parallel problem, it's one of the first problems where the FP approach has been shown to work.
In other (e.g. typical HPC) applications, there's lots of work ahead in rethinking the algorithms so they can be expressed functionally, before you can even begin contemplating how to use the hardware.
R: morphir Why is Python preferred over Scheme or any other Lisp, like Clojure? Can we stop promoting imperative-ish languages and focus on where lambda can take us?
R: ubernostrum Because Python mixes: * Easy to learn (including a REPL), * Multi-paradigm programming (doesn't shove a One True Way To Program(TM) down your throat), and, most important of all, * Solid, battle-tested libraries for the domain (SciPy/NumPy/Sage/etc.) and an easy-to-use interface to/from C.
R: hyperbovine Also, the code is extremely readable. I can't think of another language that communicates the programmer's intent better than Python. People often write code aesthetics off, but it really matters. When your audience includes not just fellow programmers, but also potential collaborators, who may or may not know Python, as well as your adviser, who knows only FORTRAN, having a language that self-documents is helpful.
R: xtho The French call it déformation professionnelle, which works wonders when it comes to perceiving the weirdest stuff as perfectly normal.
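The multi-paradigm point raised in the thread is easy to make concrete: Python lets the same computation be written imperatively or functionally side by side. A toy illustration (not from the thread):

```python
# Toy illustration of Python's multi-paradigm flexibility: the same
# computation (sum of squares of the even numbers) written both ways.
data = [1, 2, 3, 4, 5, 6]

# Imperative style: explicit loop and accumulator
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Functional style: filter/map expressed as a generator expression
total_fp = sum(x * x for x in data if x % 2 == 0)

assert total == total_fp  # both styles agree
```

Neither style is forced on the programmer, which is part of why the thread's participants see Python as a pragmatic middle ground between Fortran-style code and pure FP.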
There are several books available for preparing for the certification exams: - Michael Jang's RHCSA/RHCE Red Hat Linux Certification Study Guide, Sixth Edition, is certainly one of the best available books and perhaps the best one. It describes the exam objectives and explains all the topics one by one. Furthermore, it provides lots of exercises and two sample exams for each of the RHCSA & RHCE exams, allowing you to test your new skills. Some chapters, like virtualization (see this site) or encrypted file systems (LUKS), are not very clear for a beginner and should be rewritten. - Damian Tommasino's RHCSA and RHCE Cert Guide and Lab Manual is an interesting book. Its purpose is to help you pass the RHCSA & RHCE exams and to give you an advanced level in some configuration domains (RAID, quotas, NIC bonding, etc.). As the scope of the two exams is already very broad, it's perhaps too ambitious and doesn't help you focus on the really needed skills. - Asghar Ghori's Red Hat Certified System Administrator & Engineer is a book written by a system administrator for system administrators. It goes straight to the point but doesn't go into the details. With this book, you will learn basic skills to pass the exams, but don't expect too many explanations. Some parts, like Kerberos, are hardly covered, and it can be dry on some topics. Books about CentOS 6 or RHEL 6 that are also interesting: - Jonathan Hobson's CentOS 6 Linux Server Cookbook addresses many RHCE subjects (YUM, DNS, Samba, Apache, FTP, mail, etc.) and provides advanced recipes on some subjects like NTP and network configuration. - Sander van Vugt's Red Hat Enterprise Linux 6 Administration tackles several RHCE topics (KVM, YUM, DNS, Samba, Apache, FTP, mail, etc.) but is mainly interesting because it explains in detail how to set up an OpenLDAP server, which is indirectly a needed skill for the RHCSA exam (how do you test an LDAP client configuration without building an LDAP server?).
- Sven Vermeulen's SELinux System Administration is the best current resource about SELinux. For the LPIC-1 exam, there is an LPIC Quick Reference Guide written by Daniele Raffo and regularly updated. If you are preparing for the LPIC-2 exam, the free LPIC-2 Exam Prep is for you. If you are interested in RHEL6/Windows integration, Integrating Red Hat Enterprise Linux 6 with Active Directory by Mark Heslin is a must-read. In this free whitepaper, you will find various scenarios explained. If you want to learn the Bash shell, the Advanced Bash-Scripting Guide is for you. Finally, you can look at this directory of free programming books.
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <string>

#include "Options.hpp"

#include <cxxopts.hpp>

Options parseOptions(int ac, const char **av)
{
    cxxopts::Options argparser(av[0], "Convert images to Gamebuino flash Image");

    argparser.add_options()
        ("o,output-path", "File name of the export",
         cxxopts::value<std::string>()->default_value("code.hpp"))
        ("i,image-input", "Image path to convert",
         cxxopts::value<std::string>())
        ("c,code-name", "Identifier in the exported code",
         cxxopts::value<std::string>()->default_value("image"))
        ("transparency", "Chosen color index used for transparency. "
         "A value greater than 15 (0x0F) will make the program not handle transparency",
         cxxopts::value<uint32_t>()->default_value("255")->implicit_value("0"), "N")
        ("palette", "Chosen color palette used for finding correct indexes",
         cxxopts::value<std::string>()->default_value("default"), "default / edge16")
        ("palette-file", "File from which the color palette will be read (overrides the palette option)",
         cxxopts::value<std::string>())
        ("h,help", "Print the help", cxxopts::value<bool>())
    ;
    argparser.add_options("Spritesheet")
        ("s,spritesheet", "Activate the spritesheet mode",
         cxxopts::value<bool>()->default_value("false"))
        ("tile-x", "Number of tiles on the X coord | sub-width is deduced as image-width / tile-x",
         cxxopts::value<uint32_t>()->default_value("0"), "N")
        ("tile-y", "Number of tiles on the Y coord | sub-height is deduced as image-height / tile-y",
         cxxopts::value<uint32_t>()->default_value("0"), "N")
        ("framerate", "Framerate of the animation | number of frames per animation",
         cxxopts::value<uint32_t>()->default_value("0"), "N")
    ;

    auto result = argparser.parse(ac, av);
    if (result.count("help")) {
        std::cout << argparser.help({"", "Spritesheet"}) << std::endl;
        exit(0);
    } else if (result.count("image-input")) {
        Options opt;
        opt.sImagePath = result["image-input"].as<std::string>();
        opt.sOutputPath = result["output-path"].as<std::string>();
        opt.sCodeName = result["code-name"].as<std::string>();
        opt.iTransparency = result["transparency"].as<uint32_t>();
        // Transparency is only handled for palette indexes 0x00-0x0F.
        opt.bTransparency = (opt.iTransparency <= 0x0F);
        if (result.count("palette-file"))
            opt.fColorPalette = result["palette-file"].as<std::string>();
        else
            opt.sColorPalette = result["palette"].as<std::string>();
        opt.bSpritesheet = result["spritesheet"].as<bool>();
        if (opt.bSpritesheet) {
            opt.uTilesX = result["tile-x"].as<uint32_t>();
            opt.uTilesY = result["tile-y"].as<uint32_t>();
            opt.uFramerate = result["framerate"].as<uint32_t>();
            if (opt.uTilesY == 0 || opt.uTilesX == 0)
                throw cxxopts::OptionException(
                    "tile-x and tile-y must be specified and non-zero when spritesheet mode is active");
        }
        return opt;
    }
    std::cerr << "Invalid Options" << std::endl;
    std::cerr << argparser.help({"", "Spritesheet"}) << std::endl;
    exit(1);
}
Binding of Type&& (rvalue reference) to modifiable lvalues

Deepening my C++ knowledge, I found an excellent article that explains rvalue references: http://blogs.msdn.com/b/vcblog/archive/2009/02/03/rvalue-references-c-0x-features-in-vc10-part-2.aspx (the article is somewhat dated, published in 2009). In the article there are a few lines of code, some of which should error out:

// original
string&& i = modifiable_lvalue; // Line 26
string&& j = const_lvalue; // Line 27 - ERROR
string&& k = modifiable_rvalue(); // Line 28
string&& l = const_rvalue(); // Line 29 - ERROR

However, in Visual Studio 2012,

string&& i = modifiable_lvalue; // Line 26 - ERROR

also errors out!

error C2440: 'initializing' : cannot convert from 'std::string' to 'std::string &&'

According to the article, a modifiable rvalue reference, Type&&, "is willing to bind to modifiable lvalues and modifiable rvalues", which is exactly what should have happened on line 26. So why am I getting the error? Thanks!

EDIT: I found a very interesting article pertaining to this topic: http://web.archive.org/web/20120529225102/http://cpp-next.com/archive/2009/09/move-it-with-rvalue-references/#fn:insertionsort. Future readers might find it helpful.

@nosid: No, the code in the question is a verbatim copy of the code in the article.

The article is wrong: an rvalue reference cannot bind to an lvalue. There is, however, the concept of reference collapsing (which does not apply here) that allows something similar with a template argument:

template <typename T> void f(T&& x);

The argument x can bind to either an lvalue or an rvalue, but when it binds to an lvalue it is not an rvalue reference; rather, the deduced type T is U&, and the two references in U& && collapse into just the lvalue reference, so the argument is of type U& (with the rvalue reference being dropped). This is usually called a universal reference. Again, this does not apply to the code you quote from the article, which is incorrect.
Note that the article was written in 2009, and the standard draft at the time might have had the behavior described in the article. There were a few changes from 2009 to 2011, when the current standard was finally accepted.

+1. I believe there was a time (= version of the standard draft) when rvalue refs could bind to lvalues as well. So it might simply be based on that version.

So, basically, non-const Type&& can bind only to modifiable rvalues, while const Type&& can bind to const and non-const rvalues. On top of that, string& c = modifiable_rvalue(); is also accepted - i.e. Type& can bind to a modifiable rvalue.

@newprint: The last part is not true; a non-const lvalue reference can only bind to a non-const lvalue: Type& cannot bind to a non-const rvalue. The simple rules: a const reference can bind const or non-const values of the same category (l/rvalue); a non-const reference can only bind non-const values of the same category. Finally, the exception: const lvalue references are special and can bind to rvalues, but it doesn't really... a temporary variable is created from the rvalue and the const lvalue reference binds to the temporary.

@DavidRodríguez-dribeas: A const lvalue reference can sometimes bind directly to an rvalue; a temporary may not be created. Likewise, sometimes a temporary may be created to indirectly bind an rvalue reference. See 8.5.3.

@user1131467: You are right on the const lvalue reference. I prefer the less imprecise wording above, as it makes the requirement obvious: binding a const lvalue reference to an rvalue requires an available copy constructor (yes, the copy will be elided in 99% of the cases, but the copy constructor must be available).
Regarding a temporary being created when binding an rvalue reference to an rvalue, I am not sure about it (I haven't quite dug into 8.5.3, but intuitively that would require the availability of a copy constructor or move constructor, which I believe not to be the case).

@DavidRodríguez-dribeas: "binding a const lvalue-reference to an rvalue requires an available copy constructor": Incorrect, sorry. Counterexample: struct X { X(const X&) = delete; }; X&& f(); const X& x = f();.

@user1131467: Confusing as it might look, an rvalue reference is an lvalue, not an rvalue. Change the declaration above to be X f(); and see the results.

@DavidRodríguez-dribeas: If string& c = modifiable_rvalue(); is invalid (i.e. doesn't fall into any of the rules you presented), why doesn't Visual Studio complain? Is it a bug in the compiler or MS's unfinished implementation of C++11? Edit: Out of curiosity, I also compiled with Intel C++ Compiler 13.0 - no errors!

@DavidRodríguez-dribeas: Dude, you're confused. The function call expression f() in my first example is an xvalue, which is also an rvalue - it is not an lvalue. The direct-binding requirement also applies to class prvalues, which are also rvalues, so for example: struct X { X(const X&) = delete; }; X f(); const X& x = f();. The function call expression f() is now a class prvalue, which is an rvalue, and it is directly bound to the lvalue reference; a temporary is not created to bind indirectly, and no copy constructor is required.

@user1131467: It seems that the rules have changed quite a bit since C++03. After reading 8.5.3 in the C++11 standard, it is clear that you are right and that no copy constructor is required. On the other hand, in C++03 the standard did require the availability of the copy constructor.

@DavidRodríguez-dribeas: "Change the declaration above to be X f(); and see the results." Here is a full example including a definition of f: struct X { X(){}; X(const X&) = delete; }; X f() { return {}; }; const X& x = f();.
No copy constructor is required; the reference is directly bound to the prvalue f(). Your confusion comes from the return statement, which is copy-initialization when not using a braced-init-list, but that has nothing to do with the reference binding.

@user1131467: No, my confusion is not having read the section in C++11, as I already told you. C++03, with which I am more familiar (if nothing else for having known it for a much longer time), requires a copy in the example you provided. The reference is directly bound to an lvalue, or a temporary is (possibly) created for an rvalue, but (8.5.3/5b2): "The constructor that would be used to make the copy shall be callable whether or not the copy is actually done." I have read the C++11 standard and that has changed; now the reference binds directly to the class rvalue.

@newprint: C++ compilers are allowed to compile ill-formed programs as an extension. MSVC supports binding of non-const lvalue references to rvalues as such an extension. This is a source of controversy and complaints against Microsoft, but they are not considering changing it because it would break old Windows-specific code that depends on this extension.

The article is incorrect. You need to convert it to an rvalue reference explicitly, either with a static_cast<std::string&&> or, more idiomatically, with std::move():

std::string&& i = std::move( modifiable_lvalue );

This code tells the compiler (and the reader!) that you're promising to be done with modifiable_lvalue.
OPCFW_CODE
An initial look at UKOER without the collections strand (C). This is a post in the UKOER 2 technical synthesis series. [These posts should be regarded as drafts for comment until I remove this note] In my earlier post in this series on the collections strand (C), I presented a graph of the technical choices made just by that part of the programme looking at the issue of gathering static and dynamic collections. As part of that process I realised that, although the collections strand reflects a key aspect of the programme, and part of the direction I hope future UKOER work is going, a consideration of the programme omitting the technical choices of strand C might be of interest. The below graphs are also the ones which compare most directly with the work of UKOER 1, which didn't have a strand focused on aggregation. I'm hesitant to over-analyse these graphs and think there's a definite need to consider the programme as a whole, but will admit that a few things about these graphs give me pause for thought:
- WordPress as a platform vanishes
- RSS and OAI-PMH see equal use
- the proportion of use of repositories increases a fair bit (when we consider that a number of the other platforms are being used in conjunction with a repository)
Now, in a sense, the above graphs fit exactly with the observation at the end of UKOER that projects used whatever tools they had readily available. However, compared to the earlier programme it feels like there are fewer outliers – the innovative and alternative technical approaches the projects used and which either struggled or shone.
Speculating on this, it might be because institutions are seeking to engage with OER release as part of their core business and so are using tools they already have; it might be that most of the technically innovative bids ended up opting for strand C; or I could be underselling how much technical innovation is happening around core institutional technology (for example, ALTO's layering of a web CMS on top of a repository). To be honest I can't tell if I think this trend towards stable technical choices is good or not. Embedded is good, but my worry is that there's a certain inertia around institutional systems which are very focused on collecting content (or worse, just collecting metadata) and which may lose sight of why we're all so interested in openly licensed resources (see Amber Thomas' OER Turn and comments for a much fuller discussion of why fund content release and related issues; for reference, I think open content is good in itself but is only part of what the UKOER programmes have been about).
- The projects have been engaged in substantive innovative work in other areas; my comments are purely about technical approaches to do with managing and sharing OER.
- When comparing these figures to UKOER graphs it's important to remember the programmes had different numbers of projects and different foci; a direct comparison of the data would need more careful consideration than comparing the graphs I've published.
By Scott D. Button, Joshua W. Brown, Tek D. Kim and Thomas E. Sherer

Download full paper

Cognitive bias unfavorably affects how we make decisions under uncertainty. This is exacerbated as the scale and complexity of the systems affected by those decisions increase. Methods are needed to provide decision-analysis support for program managers and other engineers faced with allocating scarce resources for large-scale, complex product development. The approach described in this summary extracts a life-cycle thread from an overall product development network and uses Monte-Carlo analysis to simulate options for resource allocation. By doing this, we can then describe the trade-offs between the resource allocation choices. The expected benefits are reductions in the cost and elapsed time of programs that contain significant product development. A fuller description of this methodology is available in the full paper on Boeing.com. This approach starts with Lean+ Systems Integration Management (LSIM), a set of tools and methods demonstrated to increase productivity and improve the affordability of product development. The principal objective of LSIM is to perform the right work in the right order to reduce unplanned and out-of-sequence rework. This model-based approach improves the quality of deliverables, increases throughput, facilitates cross-functional integration, and provides management with a predictive model and regular feedback on focus areas requiring their attention. In 2015, we started developing life-cycle threads from our LSIM integration models. The context of a thread is the life-cycle maturity of information for a few functional groups—for example, how the Flight Controls team and the Stability and Control team interact to mature information from concept to deliverable.
These thread features enable the use of statistical clustering to extract a life-cycle pattern of products and inputs, and to apply this life-cycle pattern to other products to establish similar patterns of inputs, without manually selecting these inputs. This has the potential to significantly speed up our modeling process. In 2016, we enhanced these features to provide network diagram plots. We created the ability to add to a thread from a tree view of the product hierarchy, produce a network diagram with nodes and edges colored according to the responsible organization, and populate the nodes with additional helpful information. In 2017, we added a Monte-Carlo analysis of these threads. This new approach enables functional managers to understand the expected completion distribution of their function, in the context of the overall program. To conduct a Monte-Carlo simulation, a sample is drawn from a defined distribution representing each task. Each sample from the defined distribution constitutes a trial, and 100-1,000,000 trials are typically evaluated to determine the results. The mainstream approach to simulation of project duration is to assume a shape for the task distributions, then aggregate the tasks according to the project network to form a project completion distribution. For this Monte-Carlo analysis, we selected the log-normal distribution as the shape of the distribution for task duration. We favor the log-normal distribution as it only requires our program sources to estimate the average duration and validate our assumption that the safe duration should be two standard deviations longer than the average duration. Figure 2 illustrates a network thread for the 787 semi-levered landing gear. This is a thread excerpted from a larger, airplane-level integration model. The curve above the first two tasks in the thread represents the log-normal distribution of durations for each task.
For each task in the network, a random sample is taken from the probability density function representing that task. A finish time is computed for each task, which is the sum of the sampled task duration and the finish time of the latest predecessor to that task. The finish time of the terminal product in the network represents the finish time for the thread, for one run of the Monte-Carlo analysis. This is repeated numerous times (1,000 to 100,000) to form the completion distribution of the thread. Figure 5 shows the histogram for the completion distribution of the terminal product in the thread. Note the completion distribution shifts towards a more normal distribution, due to effects predicted by the Central Limit Theorem. The network diagram is shown as Figure 8: Landing Gear Network Diagram. The upper number is the uniform resource identifier, the lower number is the duration in days. The three tasks involved in the trade-off are indicated with an ellipse. We find it rather interesting that by moving resources from 1.3.1 to 1.3.2 in this example, the project duration is reduced by approximately the same amount as if you doubled the resources on 1.3.2 in isolation. This is due to sufficient slack in the non-critical path feeding from 1.3.1 downstream. Although this effect is somewhat obvious for this simple network, the technique will provide non-obvious results for more complex networks. This shows that Monte-Carlo analysis methods are useful for demonstrating the trade-offs between time and resources for life-cycle threads in a commercial aircraft program. It also allows program managers and engineers to understand the role of variation and uncertainty in large-scale, complex, life-cycle product patterns. This understanding is central to achieving the flow-time and cost improvements necessary to remain competitive.
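The simulation loop described above (sample a log-normal duration per task; a task's finish time is its sampled duration plus the finish time of its latest predecessor; repeat over many trials) can be sketched in a few lines. The four-task network, its durations, and the trial count below are invented for illustration; only the mechanics follow the text, including deriving log-normal parameters from an average duration and a "safe" duration two standard deviations above it.

```python
import numpy as np

# Hypothetical 4-task thread (task -> predecessors); the names echo the
# numbering style in the figures, but the network itself is invented.
network = {
    "1.1": [],
    "1.2": ["1.1"],
    "1.3.1": ["1.1"],
    "1.3.2": ["1.2", "1.3.1"],
}
order = ["1.1", "1.2", "1.3.1", "1.3.2"]  # topological order
mean_days = {"1.1": 10.0, "1.2": 15.0, "1.3.1": 8.0, "1.3.2": 12.0}
safe_days = {"1.1": 16.0, "1.2": 24.0, "1.3.1": 13.0, "1.3.2": 20.0}  # mean + 2*sd

def lognormal_params(mean, safe):
    """(mu, sigma) of the underlying normal for a log-normal whose mean
    is `mean` and whose standard deviation is (safe - mean) / 2."""
    var = ((safe - mean) / 2.0) ** 2
    sigma2 = np.log(1.0 + var / mean ** 2)
    return np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2)

def simulate(trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    params = {t: lognormal_params(mean_days[t], safe_days[t]) for t in order}
    finishes = np.empty(trials)
    for i in range(trials):
        finish = {}
        for task in order:
            mu, sigma = params[task]
            # Start when the latest predecessor finishes (0 if none).
            start = max((finish[p] for p in network[task]), default=0.0)
            finish[task] = start + rng.lognormal(mu, sigma)
        finishes[i] = finish["1.3.2"]  # terminal product of the thread
    return finishes

completions = simulate()
print(f"mean {completions.mean():.1f} days, p95 {np.percentile(completions, 95):.1f} days")
```

Summing skewed per-task distributions along the longest path is also why the completion histogram tightens toward a normal shape, as the Central Limit Theorem effect mentioned above predicts.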
The problem with making a scenario of a country and saving it in .scn is that you burn some NewGRFs into the scenario; if you want to change NewGRFs, you have to do the scenario all over again. Well, the only alternative for preparing agnostic maps would be the heightmap system. So far, heightmaps have been used as a foundation to create realistic (or not) maps, with everything except terrain shaping done by hand. Some people have tried to create "heightmap bumps" or marks for the city locations. I've decided to take this one step further. In the beginning of the efforts, what I did was collect the coordinates and population figures of 152 of the most important Portuguese cities & towns, one by one, searching in Google. I then created a .txt file ready to be imported into MicroDEM via GPS -> WayPoints; Read From File. (For those of you who don't know, MicroDEM is a program that opens and edits heightmaps.) MicroDEM would import the city coordinates as waypoints, which could then be "painted" (in the correct place, of course) into the heightmap with a symbol of your choice. This way, one could create a heightmap WITH terrain symbols in the places where the cities should be. Regarding the populations, I collected the populations of all the 152 towns & cities (again, one by one). Next, I categorized the cities according to population ranges (5000 to 10000 people, 10000 to 15000, etc.). This way, I could easily scale the Portuguese towns. All I had to do was choose which OpenTTD population the biggest city would have (in my case, Lisbon). I did a "template conversion" so that Lisbon had 10,000 ingame population. It was a good number to scale up/down mentally; if I wanted a smaller Lisbon, say 5,000, I'd just divide all the other pre-calculated population ranges by 2.

A better method
Moki found an even better way to do this. He found a method that no longer requires searching for city locations & populations one by one.
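The scaling idea above (anchor the biggest city to a chosen ingame population, then scale every other town by the same factor) can be sketched as a tiny script. The population figures here are rough placeholders, not the actual data collected for the 152 towns.

```python
# Scale real-world populations to ingame populations by anchoring the
# largest city (Lisbon) to a chosen target value. The figures below are
# rough placeholder populations for illustration only.
real_pops = {
    "Lisbon": 550_000,
    "Porto": 240_000,
    "Braga": 140_000,
    "Evora": 55_000,
}

def scale_populations(real_pops, anchor_city, target_ingame):
    """Linear 'template conversion': anchor city maps to target_ingame,
    every other town is scaled by the same factor."""
    factor = target_ingame / real_pops[anchor_city]
    return {city: max(1, round(pop * factor)) for city, pop in real_pops.items()}

ingame = scale_populations(real_pops, "Lisbon", 10_000)   # Lisbon -> 10,000
smaller = scale_populations(real_pops, "Lisbon", 5_000)   # halve everything
```

Halving the anchor value halves every other town too, which is exactly the mental divide-by-2 trick described above.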
Here he explains it step by step:

The city import patch - importing cities directly instead of creating heightmap references

Moki wrote: That's how it's done:
1. Download and merge the DEMs for the whole area
2. Download the proper location files from http://earth-info.nga.mil/gns/html/namefiles.htm
3. Import the txt into Open Office Calc. This is done by opening the file with the file type changed to Text CSV. The settings should be "Character Set = UTF-8" and "separated by Tab". This and the following steps can probably be done with Microsoft Excel too, but I have no idea how that works.
4. Data -> Filter -> Standard Filter ... filter by the PC column (that's the importance value/size). For my map, I used PC <= 3, so all cities with values 1-3 are used. This works well for whole country maps. For a regional map, you might want to use PC <= 4 or 5, and for an international map PC <= 2. If you want to use all towns (even the smallest ones without any rating), filter the FC column for P instead. This leaves all "populated places" and removes all the rivers, forests, etc. that you probably don't want to label on your map.
5. Now delete all columns that aren't needed. You need LAT (latitude), LONG (longitude) and FULL_NAME_RG (name). The PC column can also be deleted now as we don't need it anymore.
6. Rearrange the columns in the following order: LAT, LONG, NAME. MicroDEM needs this format or it won't work.
7. Press ctrl-a -> ctrl-c (select and copy everything)
8. Open the text editor of your choice (Notepad or similar), make a new document and press ctrl-v (paste)
9. Save that txt file somewhere. Doesn't really matter where, just remember the directory.
10. Go back into MicroDEM and select GPS -> Waypoints from the menu. A Waypoints window will pop up.
11. In that window click Read from file and select your previously created txt file. All the locations should appear in the waypoints list. The additional data like MGRS, Easting, Northing and Elevation will automatically be calculated.
12. Click on WPTs and define how you want your points to look
13. Check Waypoint names on map near the bottom of the window, if you want the names to be displayed
14. Click Add on map and Plot on map
15. If the points aren't displayed on the map yet, click Force redraw (that blue arrow thing) in the map window.
16. Now you can zoom and move around your map as usual and of course save the map image. All further steps are done with Photoshop/Gimp and of course in OpenTTD.

Zydeco kindly uploaded a patch he had made some time ago which allows importing the cities into the scenario editor. This way, you no longer have to use heightmap bump references to place cities manually. So, using Moki's method and Zydeco's patch, we can directly fetch a geographic map and city locations & populations from the net, and then import this data into OpenTTD. In other words, we would take steps 1 to 4 of Moki's method, and then use the import town patch. As Moki said:

Zydeco wrote: When I was making a few maps from heightmaps a few years ago, I made a patch that would read towns and coordinates from a CSV file and place them automatically, or place a sign if a town failed to be founded. I've updated it for the current version, and tried it with your data (although I set all the town sizes to small; it would require manually growing them. It also fails to place a few, because my edge coordinates are a bit off, but it saves lots of work finding where to place towns). I've uploaded the patch, the heightmap and the towns file I made by rearranging yours into the format needed by my patch, and the heightmap resulting from running the import_towns path/to/towns-pt.txt command in the editor, without doing any further adjustments.

To-do
Moki wrote: With a tiny bit of reformatting (which can be automated), the data should also be usable with Zydeco's patch.
It's definitely possible to automatically translate the PC ratings directly into town sizes, and the coordinates are already in the right format. We can now fine-tune the patch so that it can take input from Moki's method directly, whilst preventing the coordinate offset he mentioned. Next, all that's left is taking the cities' populations as input and scaling them directly to ingame populations, maybe according to some user-defined parameters like "minimum city population" or "maximum city population"... We're really close to having the whole process complete!

Next is the town import patch provided by Zydeco, along with a Win32 build compiled by Moki with the import_towns patch applied. The patch adds a command to the OpenTTD console: import_towns. So, in order to use it, you open the scenario editor, open the console (by pressing the \ key above Tab) and write import_towns <file_name>. At least this is the principle of it; I'm not sure if the very specific details are correct (more solid guidance for this will be written here later). Stay tuned!
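Steps 3 to 6 of Moki's spreadsheet workflow (filter by the PC column, keep only populated places, reduce to LAT, LONG, NAME) could also be scripted rather than done by hand. The sketch below runs against a tiny made-up sample; real GNS name files have many more columns, but the headers used here (LAT, LONG, PC, FC, FULL_NAME_RG) are the ones the post mentions.

```python
import csv
import io

# A tiny stand-in for a GNS country file (tab-separated, UTF-8).
# Real files from earth-info.nga.mil contain many more columns.
sample = (
    "LAT\tLONG\tPC\tFC\tFULL_NAME_RG\n"
    "38.7169\t-9.1399\t1\tP\tLisbon\n"
    "41.1496\t-8.6110\t2\tP\tPorto\n"
    "39.8222\t-7.4918\t5\tP\tSmallTown\n"
    "40.0000\t-8.0000\t\tH\tSomeRiver\n"
)

def filter_towns(text, max_pc=3):
    """Keep populated places rated max_pc or better, emitting
    (LAT, LONG, NAME) rows in the order MicroDEM expects."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    rows = []
    for row in reader:
        if row["FC"] != "P":                       # populated places only
            continue
        if not row["PC"] or int(row["PC"]) > max_pc:
            continue
        rows.append((row["LAT"], row["LONG"], row["FULL_NAME_RG"]))
    return rows

towns = filter_towns(sample)
# → [('38.7169', '-9.1399', 'Lisbon'), ('41.1496', '-8.6110', 'Porto')]
```

Writing the rows back out tab-separated gives exactly the txt file that MicroDEM's Read from file step (and, with minor reformatting, Zydeco's patch) can consume.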
Anna Wilbik is an Assistant Professor in the Information Systems Group of the School of Industrial Engineering at Eindhoven University of Technology (TU/e). Her areas of expertise include artificial intelligence, neural networks, expert systems, information systems and databases, business intelligence, data mining, machine learning and predictive modeling. Anna's research interests are focused on linguistic summaries, data analysis, machine learning, and computational intelligence. She is especially interested in the application of these areas in healthcare and is part of the TU/e BMT Institute for Biomedical Technology. Anna Wilbik received her PhD in Computer Science from the Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland, in 2010. She also holds an MSc in Computer Science (with honors) from Warsaw University of Technology. Anna is an alumna of the Stanford University TOP500 Innovators: Science - Management - Commercialization Program. In 2011, she was a post-doctoral fellow with the Department of Electrical and Computer Engineering, University of Missouri, Columbia, USA. Anna has previously held positions at the Faculty of Mathematics and Information Science at Warsaw University of Technology and at the Warsaw School of Information Technology, and has translated several academic books for PWN Polish Scientific Publishers. She has published over 50 papers in international journals and conferences and has helped organize several conferences in the areas of IT and computational intelligence. Anna is also chair of the IEEE CIS Task Force on Linguistic Summaries and Description of Data.
- On fuzzy compliance for clinical protocols. 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2018) (2018)
- A model based simulation toolkit for evaluating renal replacement policies. 2017 Winter Simulation Conference (WSC 2017) (2018)
- On the interaction between feature selection and parameter determination in fuzzy modelling. 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2018) (2018)
- An enhanced approach to rule base simplification of first-order Takagi-Sugeno fuzzy inference systems. 4th Annual Symposium on Combinatorial Pattern Matching (CPM 1993), June 2-4, 1993, Padova, Italy (2018)
- Finding the optimal number of features based on mutual information. 10th Conference of the European Society for Fuzzy Logic and Technology (EUSFLAT 2017) and 16th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets (IWIFSGN 2017), 11-15 September 2017, Warsaw, Poland (2018)

- Business Intelligence
- Business information systems management
- Business Modeling

No ancillary activities
Can I Use Machine Learning to Predict the Likelihood of an Order Being Shipped? Let's say I have a database of freight orders. The job is to match freight carriers with customers who need their freight moved. I have the customer's information, the freight carrier's information, and all details related to the freight orders, including date ordered, date shipped, the amount of money it took to hire the freight carrier, and whether a carrier was even found to ship the order. If I have thousands of these past freight orders, could I use machine learning to look at future freight orders to predict whether or not a freight carrier will be found to move it? Bonus: If it is possible, what steps would I need to take to find the best data points to focus on? From what I understand, I need to convert everything to a number in order to train the classifier, but I am having trouble figuring out what data features are going to help make these types of predictions. I have been studying how to do machine learning and I am not looking for somebody to tell me everything there is to know on the subject; I just don't know how to determine what data points are going to be useful, and am also looking for an answer to whether or not this is something machine learning can do (or if it's something a beginner in machine learning can do). Sorry if the question seems vague; it's kind of hard to articulate on a subject you are just starting to learn about. If anybody has materials they can link that would help me to better understand these things, I would appreciate that as well. This sounds too broad to me. Community votes? (Your first step is to study how machine learning works. It sounds to me like you are asking "I don't know anything about machine learning; can you tell me everything I need to know about the field to solve this problem?" That sounds too broad to me. I'd suggest you start by doing some self-study, then come back when you can formulate a more specific question.
Alternatively, can you edit the question to ask about a specific aspect of the problem?) Well, I have spent hours studying machine learning, and there is very little information I have found that is helpful, due to not knowing much about the subject, so finding places to start is difficult to say the least. I know how to use the library I am using to do what I need to do; the problem is that I don't know much about statistics and I don't know what data points are going to give me the best predictions. The rest of it I can figure out. I don't need any help with the library. @DavidRicherby, The official question is, "Can I use Machine Learning to Solve this Problem?". Listing steps I would need to take to get good predictions would just be icing on the cake, but I think the question is definitely on-topic. I guess the library wouldn't matter honestly; I am more concerned with whether machine learning can solve the problem or not. @DuckPuncher I agree that the edited question is wholly on-topic. @DavidRicherby, Yeah, I had to edit once I realized that it did in fact sound like a different question :P Questions like "What data features are going to be useful?" are not computer science questions. Those are questions that relate to your particular application domain: freight ordering. So, I don't see how we could possibly answer that for you -- you'll have to tell us. But in a similar vein: Have you read about feature selection? @D.W., I haven't read about feature selection yet, but after googling it that seems to be what would answer that bonus question. Thanks! Here's the thing to know: machine learning assumes that there is some sort of statistical distribution, and you "learn" that distribution, in order to get the probability of some event. The common saying in machine learning is "garbage in, garbage out." You have a bunch of random variables, but if those variables are all statistically independent of the outcome, then you won't get anything useful from machine learning.
Say, for example, that there was a strong correlation between the amount of money paid for the carrier and whether the item is shipped. Machine learning would likely be able to discover this relationship. Or, if there were certain periods of the year where an item was more or less likely to be shipped, machine learning could find this out. But, if there's no underlying pattern in the data you give to your training algorithm, then you will get no useful information out of it. For your question of finding the best data points: don't. That's what the machine learning algorithm is for. You give it your data, and it looks at which ones are statistically relevant, given some threshold. The whole point of machine learning is letting the algorithm do that for you, and more importantly, that's all the algorithm can really do for you. Generally, these algorithms will work better when they have more data, so don't go out of your way to remove what data you're giving it. And, remember, that ALL these algorithms are doing is statistics. If it says that the item is shipped with probability 0.6, don't be surprised when it isn't shipped. And, it's possible that it will say the item is shipped with probability 0.99, but it won't ship, because there's some variable which is hugely important in real life, which you didn't have recorded for your data set. If that variable isn't correlated with your data, then your model will be no good. This is exactly what I was looking for! I have troves of data and we think there are patterns at play here. Thanks! You might want to look at WEKA as a toolkit for doing your tasks. I'm guessing you'll want to start with decision trees, SVMs, and Naive Bayes as the simplest classifying algorithms. But that's now more StackOverflow territory than CS.SE.
I'm willing to try anything at this point. Thanks again! Yeah, Python has a library for everything, so there's a good chance scikit-learn will do the trick for you. Good luck! Yes, you can definitely use machine learning to predict the likelihood of an order being shipped. This prediction can be helpful for various purposes, such as optimizing inventory management, improving logistics, and providing better customer service by anticipating delays. Here's an example of how you could approach building a machine learning model to predict the likelihood of an order being shipped on time:

Data Collection: Gather historical data on orders from your database or e-commerce platform. The data should include features such as order date, shipping date, order location, shipping location, product type, shipping method, and any other relevant information.

Data Preprocessing: Clean and preprocess the data to handle missing values, encode categorical variables, and scale numerical features if necessary. This step is crucial for ensuring the data is in a suitable format for machine learning algorithms.

Feature Engineering: Create additional features that might be useful for predicting shipping likelihood. For example, you could calculate the time difference between order date and shipping date, or extract features related to the shipping destination, such as distance from the warehouse or average shipping time to that location.

Model Selection: Choose an appropriate machine learning algorithm for the prediction task. Some common choices for binary classification tasks like this include logistic regression, support vector machines, decision trees, random forests, or gradient boosting algorithms.

Model Training: Split your preprocessed data into training and testing sets. Train the selected machine learning model on the training set using the labeled data (orders that were shipped on time and those that were not).

Model Evaluation: Evaluate the performance of your model on the testing set using metrics such as accuracy, precision, recall, or F1 score. These metrics will help you assess how well the model is predicting the likelihood of orders being shipped on time.

Model Deployment: Once you're satisfied with the model's performance, deploy it to make predictions on new orders in real-time or on a periodic basis.

Example: Let's say you work for an online retail company, and you want to predict the likelihood of an order being shipped on time. You gather historical data on orders placed over the past year, including order date, shipping date, order location, shipping location, and product type. After preprocessing the data and engineering features like the time difference between order date and shipping date, you decide to use a random forest classifier for the prediction task. You split the data into a training set (70%) and a testing set (30%) and train the model on the training set. After training, you evaluate the model on the testing set and find that it achieves an accuracy of 87%, which is promising. Now, you can deploy this model to predict the likelihood of shipping orders on time for new orders as they come in, helping your company optimize logistics and improve customer satisfaction. Keep in mind that the success of the model depends on the quality of the data and the features used, so continuous monitoring and refinement of the model may be necessary to maintain accurate predictions over time. I learned to do this with the help of https://www.eduonix.com/live-data-science-certification-program
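The workflow in the answer above can be sketched end to end with scikit-learn. Everything here is synthetic and illustrative: the feature names, the made-up relationship between pay-per-mile and carrier availability, and the model settings are assumptions standing in for the asker's real order data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical freight orders. The rule
# "higher pay per mile -> more likely to find a carrier" is invented
# purely to give the model a pattern to learn.
rng = np.random.default_rng(42)
n = 2000
price = rng.uniform(100, 5000, n)       # payment offered to the carrier
distance = rng.uniform(10, 3000, n)     # shipment distance in miles
month = rng.integers(1, 13, n)          # crude seasonality feature

p_found = 1.0 / (1.0 + np.exp(-2.0 * (price / distance - 1.5)))
carrier_found = rng.random(n) < p_found  # the label: was a carrier found?

X = np.column_stack([price, distance, month])
X_train, X_test, y_train, y_test = train_test_split(
    X, carrier_found, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
# Probability a carrier is found for one new (synthetic) order.
proba = model.predict_proba(X_test[:1])[0, 1]
print(f"test accuracy: {acc:.2f}")
```

With real orders, the encoding of categorical fields (customer, lane, product type) and the feature engineering would matter far more than the choice of classifier.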
Time for my obligatory biannual update post! I need to do these way more often. Here are a few awesome milestones: - My YouTube channel just passed 10,000 lifetime views! Wow! Thanks to everyone who’s watched. - The channel’s also about to hit 100 subscribers – we’ve got 7 to go! - Dungeon Mage turned 2 years old! I finished coding it in July 2013, and got it published in August – Where has the time gone?! - SpeakEasy, Infinite Parallax, and Easy Mobile Controls are all on the front page of the GameMaker Marketplace! Other than that, not much has changed, because I have not spent much time programming! (At least not games.) Life has just been way too busy. I’ve also had some emotional ups and downs along with a few reflective moments that have made me start to reevaluate whether or not game development is a hobby I really want to pursue any more. I haven’t disclosed this before, but I’m a high school student. I’ll be a senior in the fall. Going into high school, I was pretty crazy about getting an actual game shipped. The closest I got to this was publishing Dungeon Mage, which I coded in about a month and a half during the summer following my freshman year. Most of the major projects, published and unpublished, that I’ve worked on got developed around this time period. My work output peaked during early high school, and since then, as I’ve gotten more and more busy with other things, I’ve worked on and published games less and less. (I have three or four fairly polished prototypes I’ve just been too lazy to put out there.) In addition to being busier, as a person I’ve started to burn out a bit, and my life priorities have shifted. I’m not yet sure how or even if game development fits into them. Most recently I’ve been working in web app development using a platform called Meteor; it’s pretty nifty (and also reactive). But even webdev I haven’t touched in over a month, and I’m not sure when I’ll go back to it. I’m kind of taking a break from computers. 
I’m currently doing an internship in New York City working with refugees; it’s turning out pretty great, and I’m enjoying spending comparatively less time in front of screens. What I’ve slowly been coming to realize is that time is a very limited commodity: I don’t have a lot of it, and it’s most important for me to spend the little that I do have on things that really matter to me. And right now, I’m not sure gamedev is one of those things. I’ve learned a ton from the work I’ve done; I’ve picked up a lot of skills, and I’ve got the beginnings of a resumé. But it’s time to move on. I’ve been doing this since I was eight years old (almost a decade now), and I’m going to try something else. I’ve been screwing around on SoundCloud lately (and when I say screwing around, I mean screwing around – it’s all pretty bad), and I’m thinking about starting a new non-gamedev-related YouTube channel. I might even start regularly writing a blog. (Medium is a pretty cool website for getting written content out.) I’d also like to improve my art skills. One of my biggest priorities is to establish an actual social media presence, because so far I’ve done a terrible job of networking. (As an introvert, I’m kind of terrified of the social Internet. Gonna change that!) But honestly, I have no real plans for anything specific, or even anything technology-related. I’m just going with the flow. If that flow leads me back to computers, great. If not, great. Thanks to everyone who’s followed my sporadic progress over all these years. I appreciate you and the support you’ve given me. You guys are great! I’ll probably come back to gamedev, maybe when I have a little more time on my hands. But for now, I’m putting the game-development side of ShroomDoom Studios on hiatus.
[meta-xilinx] microZed using meta-topic
nathan.rossi at xilinx.com
Wed Mar 26 23:46:17 PDT 2014

> -----Original Message-----
> From: meta-xilinx-bounces at yoctoproject.org [mailto:meta-xilinx-
> bounces at yoctoproject.org] On Behalf Of Oleg K Dzhimiev
> Sent: Wednesday, March 26, 2014 11:24 AM
> To: Alan DuBoff
> Cc: meta-xilinx at yoctoproject.org
> Subject: Re: [meta-xilinx] microZed using meta-topic
> I don't know why, but if I do the following, look at the failure.
> doesn't fail ?
> Yes, works for me.
> All I have found so far on your error are suggestions:
> 1. bad hard drive
> 2. something's with the network.
> Just in case here's my output:
> $ git clone -b master-next git://github.com/Xilinx/u-boot-xlnx.git
> Cloning into 'u-boot-xlnx'...
> remote: Reusing existing pack: 253001, done.
> remote: Counting objects: 6, done.
> remote: Compressing objects: 100% (6/6), done.
> remote: Total 253007 (delta 0), reused 0 (delta 0)
> Receiving objects: 100% (253007/253007), 69.46 MiB | 732 KiB/s,
> Resolving deltas: 100% (201898/201898), done.
> $ git fsck
> Checking object directories: 100% (256/256), done.
> Checking objects: 100% (253007/253007), done.
> Checking connectivity: 253007, done.

So I have seen some failures too regarding the git protocol on GitHub; I thought they were just due to internal proxy issues. I haven't had problems with http/https (and it is faster for me). So I have done what GitHub recommends anyway and changed all the github.com/Xilinx/* URIs to use the https protocol. I have pushed those changes to master. I would recommend making this change on the meta-topic and meta-ezynq layers as well (at least for the Xilinx repos).

More information about the meta-xilinx mailing list
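The https switch described above can also be applied on a developer's machine without editing any layer recipes, using git's standard URL-rewrite configuration (a sketch of the same workaround, not something from the thread itself):

```shell
# Rewrite any git:// GitHub URL to https:// at clone/fetch time.
# url.<base>.insteadOf is a standard git config feature; applying it globally
# means recipes that still reference git://github.com/... keep working.
git config --global url."https://github.com/".insteadOf git://github.com/

# Show the rewrite rule that was recorded
git config --global --get url.https://github.com/.insteadOf
```

This affects only the local machine, so it is a useful stopgap until the layer metadata itself is updated.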
Increasingly, existing large datasets (such as the Geriatrics and Extended Care Data and Analysis Center [GEC-DAC] dataset) and prospective observational/quasi-experimental studies are being used to examine important research questions in seriously ill older adults and to explore new models of care delivery. Randomized controlled trials can be burdensome to seriously ill patients or infeasible to conduct, and they may not produce results generalizable to the population of interest. Observational data analyses in geriatric palliative care must account for severe treatment endogeneity, which occurs when factors are simultaneously associated with treatment likelihood and outcomes. Propensity scores are one way to address endogeneity. A propensity score is the estimated probability of treatment receipt, conditional on a set of observed covariates that are thought to be associated with both treatment likelihood and outcome. An unbiased treatment effect can be estimated by comparing treated and comparison individuals with similar propensity scores. Most guidance on propensity scores is restricted to methods for matching individuals with similar propensity scores across two groups (treatment, no treatment). Many treatments, however, have multiple levels, and restricting treatments to binary indicators obscures differences between groups. Weighting by propensity scores is a superior alternative to matching when there are multiple treatment groups. This study aims to develop best practices for using propensity scores for multimodal treatments and to strengthen researchers' abilities to use existing VHA data to improve health care value and efficiency for older veterans. 
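As a hedged illustration of the weighting idea described above: the simulation below is invented for this example (it is not the study's code, and the coefficients and effect sizes are arbitrary), but it shows inverse probability weighting by propensity scores for a three-level treatment.

```python
# Illustrative simulation (not the study's code or data): inverse probability
# weighting (IPW) by propensity scores for a three-level treatment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                       # observed covariates

# Treatment likelihood depends on the covariates (endogeneity)
logits = np.column_stack([np.zeros(n), 0.8 * x[:, 0], 0.8 * x[:, 1]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
t = np.array([rng.choice(3, p=row) for row in p])

# Outcome: the true mean effect of arm k is k, plus covariate confounding
y = t + x[:, 0] + x[:, 1] + rng.normal(size=n)

# Propensity scores from multinomial logistic regression
ps = LogisticRegression(max_iter=1000).fit(x, t).predict_proba(x)
w = 1.0 / ps[np.arange(n), t]                     # inverse probability weights

# Weighted means estimate each arm's population-average outcome;
# with IPW these should be close to 0, 1, and 2 despite the confounding
estimates = [np.average(y[t == k], weights=w[t == k]) for k in range(3)]
print([round(e, 2) for e in estimates])
```

Kernel weighting, covariate-balancing propensity scores, and generalized boosting (the alternatives the study compares) would replace the logistic-regression step or the weight construction in this sketch.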
Specifically, this study will 1) use simulated data to determine which weighting/estimation combination (inverse probability weighting or kernel weighting by propensity scores estimated via regression with maximum likelihood estimation, covariate-balancing propensity score estimation, or generalized boosting methods) provides the most efficient estimates with the least bias in a variety of estimation scenarios, 2) determine which weighting/estimation strategy provides the best observed covariate balance (a secondary measure of propensity score performance) across multiple treatment levels in a variety of simulated estimation scenarios, and 3) determine which weighting/estimation strategy is the least susceptible to residual confounding. Traditional Monte Carlo and plasmode (empirically based) simulations will be used to achieve the aims. To facilitate translation of results, we will repeat Aims 2 and 3 in empirical datasets with different sample sizes and expected treatment effect heterogeneity. Results will be verified by estimating effects of sedative-hypnotics on risk of in-hospital death in previously collected data from a study of 100,000 hospitalized veterans with cancer, heart failure, chronic obstructive pulmonary disease, and/or HIV/AIDS and from a study of 300,000 veterans with an opioid prescription. Simulations and analyses are ongoing; no findings to report to date. We expect to identify patterns of superior performance for strategies in common estimation scenarios as well as scenarios in which inferences are most likely to diverge. We will develop training materials based on our results and work with an advisory committee of leaders in observational data analysis to disseminate these results widely and inform studies of non-randomized health care interventions (such as post-hospitalization referral to Geri-PACT) as well as studies using VHA "big data" resources. 
Throughout and after the study period, we will disseminate current best practices to researchers and analysts throughout the VHA. External Links for this Project Grant Number: I01HX002237-01A1 - Garrido MM. Robust Evaluations of Intensive Care Unit Length of Stay Using Observational Data. [Letter to the Editor]. Journal of palliative medicine. 2018 Mar 1; 21(3):280. [view]
M: How speedy is SPDY? [pdf] - ldng https://www.usenix.org/sites/default/files/conference/protected-files/nsdi14_slides_wang.pdf R: bsdetector > Most performance impact of SPDY over HTTP comes from its single TCP > connection This is not a surprise at all, because Google _never tested SPDY against HTTP pipelining_. And at least in their mobile test they included the TCP connection time for HTTP but not for SPDY; I suppose their test software just reused the same SPDY connection since the mirrored web pages all were served from the same IP. They compressed sensitive headers with data leading to CRIME attacks. They had a priority inversion causing Google Maps to load much slower through SPDY than HTTP. This new protocol is a complete mess, from beginning to end. R: Lukasa Leaving aside the technical statements about SPDY, the reality of HTTP pipelining is that no-one uses it. According to Wikipedia, Opera is the only major browser that ships with pipelining enabled. Most intermediaries don't support pipelining either. Pipelining was a well-intentioned feature which didn't solve the core problem: namely, that a big or slow request can block you from doing anything else for a really long time unless you open another TCP connection. R: bsdetector Except that Microsoft tested SPDY against pipelining and found that pipelining was essentially just as good. So we're left with a situation where Google could have used HTTP pipelining over SSL (so there's no buggy proxies interfering, just like what SPDY does) and gotten pretty much all the benefit with no extra complications at all, but instead there's old HTTP and a new, much more complicated protocol. And this "head of line blocking" problem... who said it was a problem, Google? In reality you have 4 or more connections that automatically work like spread spectrum where most resources aren't stuck behind a big or slow request. 
But even if this was an actual problem, a simple hint in the HTML that the resource might take a while and to put other ones on a separate connection would optionally solve this problem, and with almost no extra complexity. R: jgrahamc _Except that Microsoft tested SPDY against pipelining and found that pipelining was essentially just as good._ Can you point to that? R: youngtaff I think parent is referring to [http://research.microsoft.com/pubs/170059/A%20comparison%20of%20SPDY%20and%20HTTP%20performance.pdf](http://research.microsoft.com/pubs/170059/A%20comparison%20of%20SPDY%20and%20HTTP%20performance.pdf) It's a bit sketchy on the details and data - reading it I certainly end up with more questions than answers R: Mojah Very nice research, kudos to everyone involved. I agree with the conclusions, mostly the very last one. > To improve further, we need to restructure the page load process To fully utilise the potential of HTTP/2, we will have to rethink the way we create and manage websites. I've posted more thoughts on this on my blog; [https://ma.ttias.be/architecting-websites-http2-era/](https://ma.ttias.be/architecting-websites-http2-era/) R: YZF It's not surprising that introducing high loss through a network emulator results in reduced performance of a single TCP connection vs. multiple connections. That's because there's a relationship between the maximum bandwidth a single TCP connection can carry and the packet loss % due to TCP's congestion avoidance. Introducing "fixed" packet loss through an emulator isn't necessarily a good representation of a real network where packets would be lost due to real congestion (an overflowing queue). Throwing many TCP connections into a congested network can let you get a higher share of that limited pipe though... R: eggnet Wireless networks often have a certain rate of packet loss unrelated to congestion, due to a poor signal or interference.
It's somewhat humorous because wireless networks are one of the main purported benefits of SPDY. R: moyix Kudos to them for releasing their data and tools [1]! This is how science should work. [1] [http://wprof.cs.washington.edu/spdy/data/](http://wprof.cs.washington.edu/spdy/data/) R: Animats That's a nice study. The main result is that most of the benefit comes from putting everything through one TCP pipe. This, of course, only works if almost everything on a site comes from one host. This is a good assumption for Google sites, which communicate only with the mothership. It's not for most non-Google sites. R: josteink And for those sites, you can always use HTTP pipelining and avoid the whole SPDY can of worms. Looking at the facts, it seems pretty obvious that whatever theoretical gains you can get in select scenarios with SPDY, this super-minor gain is not worth it compared to the associated complexity cost. Not to mention I don't like the idea of Google now not only running the world's tracking units, the world's most popular browser and most popular websites, but now also dictating internet protocols without taking input from other parties. R: youngtaff What's the complexity cost in SPDY or HTTP/2's case? For most optimised HTTP/1.x sites there's already a complexity cost of merging JS files, merging CSS files, building sprites - including the tradeoff of getting the bundles right, which of course reduces cachability. R: josteink > For most optimised HTTP/1.x sites there's already a complexity cost of > merging JS files, merging CSS files, building sprites - including the > tradeoff of getting the bundles right, which of course reduces cachability. And all of this is a build-time problem. If we're going to engineer the HTTP protocol to solve build-tooling and development-related problems, we might as well add JS linting and minifying to HTTP itself as well. Seriously: This problem is best solved elsewhere.
R: youngtaff You've got it the wrong way around: these aren't build-tooling and development-related problems, they're problems with HTTP/1.x that we chose to solve using the build process. R: Hengjie Trivia: the PDF is in a folder called "protected-files" - LOL
29 Sep 2019 Download the Microsoft JDBC Driver for SQL Server to develop Java applications that Microsoft JDBC Driver 7.0, 7/31/2018, JRE 8, 10. Windows x64 downloads. Windows MSI Installer (AMD64 / Intel EM64T), 5.1.5, 9.3M, Download Driver, 5.1.5-0, 1.5M, Download Solaris 10 (SPARC, 32-bit), 5.1.5, 4.5M, Download. You must download the repository from the MySQL site and install it directly. this line: symbolic-links = 0 key_buffer_size = 32M max_allowed_packet = 32M 10 #max_binlog_size = 100M #log_bin should be on a disk with enough free space. Install the JDBC driver on the Cloudera Manager Server host, as well as any The MariaDB database server is published as free and open source software under the General Public License version 2. in Java to MariaDB and MySQL databases using the standard JDBC API. MariaDB Connector/J .jar files are available at: https://downloads.mariadb.com/Connectors/java/ Download 1.3.7 Stable. Vertica Downloads Client Drivers. 9.3.x client driver checksums 9.2.x (RPM), ODBC and vsql, Package contains 64-Bit version for FIPS-enabled host. Windows 9.0.x (TAR), ODBC, JDBC, Python, and vsql, Package contains both 32 and 8.1.x, ODBC, JDBC, and vsql, Package contains 64-Bit version, Hotifix 7 Download mysql JAR files ✓ With dependencies ✓ Documentation ✓ Source code. Download all versions of mysql-connector-java JAR files with all Electric Cloud does not distribute the JDBC drivers for MySQL or Oracle databases. If you want to use one of these databases, you must download its driver directly from the On Windows, the default location is C:\ECloud\i686_win32\lib\. Go to http://dev.mysql.com/downloads/connector/j and with in the dropdown select "Platform Independent" then it will show you the options to 30 May 2015 In this video tutorial I have recorded the steps to download the JDBC Driver for Mysql.
You have to visit 12 Apr 2018 This video shows the ODBC and JDBC driver installation tasks on Windows before you can use MySQL Connector from Informatica To download the JDBC driver, go to: https://downloads.mysql.com/archives/. 14 Oct 2019 Download MySQL Connector/J - A database connector for MySQL servers The connector uses a JDBC driver for retrieving information from the database Generally, one cannot work with a MySQL database inside a Java app without it. DOWNLOAD MySQL Connector/J 8.0.18 / 5.1.46 for Windows. 15 Sep 2019 Download JDBC driver JAR files for MySQL, SQL Server, Oracle, PostgreSQL, SQLite, Derby, Microsoft Access. Maven dependency is There is a distribution of Apache Derby comes with JDK 7 called Java DB. So if you are Free download page for Project id2d's mysql-connector-java-5.1.15-bin.jar. within Oracle, IBM DB2, Sybase, Microsoft SQL Server and MySQL databases. dbExpress driver for MySQL Download (2020 Latest) for dbExpress driver for MySQL is a database-independent layer that defines common interface to provide fast access to MySQL from Delphi and C++Builder including Community Edition on Windows and macOS for both 32-bit and 64-bit platforms.
For this server, dbExpress provides a driver as an independent library that implements the common dbExpress interface for processing queries and stored procedures. MYSQL installer for Windows 7 64 bit OS? - Stack Overflow MySQL Installer is 32 bit, but will install both 32 bit and 64 bit binaries. Will it automatically install appropriate version(32 bit/64 bit) as per my current OS(as i do not see the separate binaries files for 64 bit and 32 bit).i downloaded the first one from the above link as per anwers below and got only one installer file mysql-installer jdbc driver for mysql free download - SourceForge jdbc driver for mysql free download. Hibernate Hibernate is an Object/Relational Mapper tool. It's very popular among Java applications and impleme MySQL Java tutorial - MySQL programming in Java with JDBC; Connect to MySQL with JDBC driver – ODBC Administrator tool displays both the 32-bit and the May 13, 2013 · The 64-bit ODBC Administrator tool can be invoked from Control Panel to manage user DSNs and system DSNs that are used by 64-bit processes. On a 64-bit operating system, the 32-bit ODBC Administrator tool is used for Windows on Windows 64 (WOW64) processes.
Search results - Microsoft Download Center Download the Microsoft JDBC Driver 7.0 for SQL Server, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Editions. 32 and both 64 bit JDBC in java at same time - Stack Overflow simultaneously use a 64-bit ODBC driver and a 32-bit ODBC driver from the same Java application, in either a 32-bit or 64-bit JVM You just need to use a Type 3 JDBC connection -- a "multi-tier" JDBC-to-ODBC Bridge -- such as the Enterprise Edition JDBC Driver for ODBC Data Sources from my employer, to bridge the "bitness" gap. JDBC DRIVER free download - SourceForge JDBC DRIVER free download. jTDS - SQL Server and Sybase JDBC driver Open source JDBC 3.0 type 4 driver for Microsoft SQL Server (6.5 up to 2012) and Sybase ASE.
If you choose to update, most of your programs and Windows 8 desktop apps are designed to function properly. Microsoft is trying to push its store, and in order to be in it, you have to follow their rules. Lenovo's E50 is priced extremely temptingly. Thanks for clearing that up. The only differences among those three models are screen size, screen resolution, and battery. I'm serious, because everything I've seen so far either no longer works, or involves something that most people are not going to want to do. As far as the fears of a less open platform go, I think that's silly. There are a number of ways that you can get your eager hands on a copy of Windows 8. If you can't find your license keys, might be able to pull them for you. If you are a person who needs to have more than one web tab open at once, computers with Windows 8 allow you to view two web tabs side by side. You can switch from one computer to the next and still keep all of your essential documents in one place. Adding a Microsoft account is helpful if you are using your Windows 8 computer for work or school. What is different with Windows 8? Xbox Live integration: There will be achievements for things like FreeCell; I'm kind of psyched about this. On supported machines, boot-up times are faster than Windows 7's, reportedly starting within 14 seconds from an off state. The Metro Start screen and the Microsoft Store are among the many available features on a desktop computer with Windows 8. Any computer with Windows 8 can handle your workload and school work without requiring expensive accessories or add-ons. The Metro-style interface is launched at startup, so you won't see the old desktop; rather, you will see tiles from apps that dynamically change to provide notifications and alerts. Pick the one that suits you best.
Optimized and built specifically to be used on touch screen devices, such as tablets and 2-in-1s, the Metro Start screen gives updates for applications like Twitter and Facebook, and the search feature includes local system results as well as results from the web. But I just want to know straight up. A Windows 8 desktop computer has a newer interface than Windows 7 and is designed for touchscreen operation. Make a list of the programs you want to reinstall and make sure you have the installation files available. The only thing that I will hate when some company I support goes to Windows 8 (if they do) is educating users on the lack of a visible start menu. First released around 2014, Advan Vanbook W80 was tagged around Rp. You just eventually get forced into it. A nice fresh installation and a fair shot at impressing me on my main notebook. As you'll guess from the name, it's an 8-inch tablet, and the star feature is the 1920x1200 resolution display. I can say that most people that have used my computer catch on quickly with the Metro interface, because I took a long time to make it pretty organized and didn't hide any icons; I just made spaces for apps instead of it being totally disorganized when you first install.
And here's the truth that debunks that idea: I'm fairly certain it's still possible, it just isn't as easy anymore, I believe; before, you could just flip a registry key, but they took that away. The majority of people who post in this section build their computers. The first thing that will surely catch the user's eye is the Metro-style interface, which has been redesigned for use with touchscreens, keyboards, and mice, and is similar to the Windows Phone 7 interface. Clone your system before you upgrade. If, for some reason, your upgrade turns into a nightmare, reverting to your old version of Windows might become your only choice. What's the cheapest way to get Windows 8? What are some features of Windows 8 computers? It's a perfect buy for folks not only looking for a value tablet but also those hunting for the 2-in-1 experience, since it has an optional keyboard attachment. This is more expensive than Windows 7 by about 10%, which means that the latter is probably a better upgrade route for Windows 10. You also benefit from better file operations with Windows 8. It's tempting to run straight to the , and you wouldn't be making a bad choice at all, but you can get a similar experience for half the price with the most excellent. There is often an update available from the manufacturer that is easy to use. With medium specifications at such an affordable price, the Advan Vanbook W80 Windows tablet is a very good deal, don't you think :D? Do you need a Microsoft account to use Windows 8? Long story short - I want to give Windows 8 another try. Though Chrome and Firefox can sync your bookmarks, it wouldn't hurt to save a local copy of your bookmarks too.
If the Windows Upgrade Assistant flagged items, check your system manufacturer's Web site for the latest drivers for things like printers, touchpads, graphics cards, and audio cards. I would recommend getting used to the standard way, where you click in the bottom left and it pops up but has no visible start menu icon to actually select. I really want it, but I can't see dumping so much money into it right now. Oct 29, 2012 · Another popular Windows 8 question. In my opinion, it's a bit too pricey.
The term “emulation” comes from the verb “emulate,” which means to imitate or reproduce. Therefore, computer emulation is when one system imitates or reproduces another system. The VisualBoyAdvance emulator, for example, allows users to play Game Boy Advance games on Windows or Macintosh computers. Hereof, what is an emulator and how does it work? In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Additionally, are emulators illegal? Emulators are legal to download and use; however, sharing copyrighted ROMs online is illegal. There is no legal precedent for ripping and downloading ROMs for games you own, though an argument could be made for fair use. Here's what you need to know about the legality of emulators and ROMs in the United States. Subsequently, one may also ask, what is an emulator used for? Emulation refers to the ability of a computer program in an electronic device to emulate (or imitate) another program or device. What are the various components of an emulator? Emulators are usually composed of three components: - CPU emulator (the most complex part) - Memory sub-system emulator. - Different input/output device emulators. How does an emulator work? Emulators are a class of computer software that allow one computer system, the host, to simulate a different operating system in order to run an application meant for the foreign system. There's a good chance you've messed around with emulators before if you've downloaded a console emulator, for instance. How safe is BlueStacks? BlueStacks is safe. Sometimes antivirus software on Windows PCs detects the BlueStacks Android emulator as malware, but this is a false positive.
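The three-component breakdown above (CPU core, memory sub-system, I/O devices) can be illustrated with a toy sketch; the instruction set here is invented for the example and does not correspond to any real console:

```python
# Toy illustration of the emulator components described above: a CPU core
# (fetch-decode-execute loop), a memory subsystem, and a minimal output
# device. The opcodes are invented for this example.

class ToyEmulator:
    def __init__(self, program):
        # Memory subsystem: program loaded at address 0, zero-filled to 256 cells
        self.memory = list(program) + [0] * (256 - len(program))
        self.acc = 0      # accumulator register
        self.pc = 0       # program counter
        self.output = []  # stand-in for an output device

    def step(self):
        # Fetch: read opcode and operand; decode/execute: dispatch on opcode
        opcode, operand = self.memory[self.pc], self.memory[self.pc + 1]
        self.pc += 2
        if opcode == 1:    # LOAD immediate into the accumulator
            self.acc = operand
        elif opcode == 2:  # ADD immediate to the accumulator
            self.acc += operand
        elif opcode == 3:  # PRINT the accumulator to the output device
            self.output.append(self.acc)
        elif opcode == 0:  # HALT
            return False
        return True

    def run(self):
        while self.step():
            pass
        return self.output

# Program: LOAD 40, ADD 2, PRINT, HALT
print(ToyEmulator([1, 40, 2, 2, 3, 0, 0, 0]).run())  # -> [42]
```

Real emulators follow the same loop, but the "decode" step covers the guest machine's full instruction set, and the memory and I/O components model the guest hardware (video, sound, controllers) accurately enough for its software to run.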
Antivirus software can sometimes be wrong. You can use the BlueStacks App Player with confidence. What is the difference between emulation and simulation? Simulation is when you are replicating, by means of software, the general behaviour of a system starting from a conceptual model. Emulation is when you are replicating, in a different system, how the original system actually works internally, considering each function and their relations. How do you create an emulator? How to Create an Android Emulator in Windows - Download and install VirtualBox. - Download the latest version of Android x86 from android-x86.org. - Launch VirtualBox. - Click New. - Select at least 1024MB of RAM and click Next when prompted for memory size. - Select Create a virtual hard drive and click Create when prompted to choose a drive. Is there a PS4 emulator? So the answer is no, there is no PS4 emulator available, and there will probably not be one in the foreseeable future. If you want to play a PS4 game, then buy a PS4. Is Emuparadise safe? YES, Emuparadise is a legit and safe site. You can download as many games and ROMs as you want. The free plan is decent for 95% of their audience. There are still 5% who upgrade to their paid plans, which is pretty cool. Are Android emulators legal? Android emulators are not illegal because the Android operating system is available in an open-source format. Also, using an app on any device (computer via emulator, phone, tablet, etc.) is legal if it is being used as intended, and unaltered, on the proper operating system. Why is my emulator so slow? The Android Emulator is very slow. The main reason is that it is emulating the ARM CPU & GPU, unlike the iOS Simulator, which runs x86 code instead of the ARM code that runs on the actual hardware. The Android Emulator runs an Android Virtual Device, or AVD. Which Android emulator is fastest? Here we have listed the fastest Android emulators for PC below: - Nox App Player Emulator.
Nox App Player is one of the fastest and smoothest Android emulators for PC. - AmiDuOS. AmiDuOS is an easy and fast emulator for PC. - Remix OS Player. Remix OS Player is one of the most popular Android emulators for PC. Which Android emulator is best? BlueStacks is probably the best-known Android emulator among Android users. The emulator is preferred for gaming and is ridiculously easy to set up. Other than the Play Store, you have the option to download BlueStacks-optimized apps from its own app store. What is an Android emulator? An Android emulator is an Android Virtual Device (AVD) that represents a specific Android device. You can use an Android emulator as a target platform to run and test your Android applications on your PC. Using Android emulators is optional. Are emulators safe? Emulation is generally quite safe. There are a few things to watch out for, though. While SNES emulators are fine (and you should probably use Snes9x instead of ZSNES; it's better), you can find websites offering emulators for newer systems like PS4 or 3DS that will probably be viruses. What is the best free Android emulator? List of the Best Android Emulators for Windows 10 - NoxPlayer. NoxPlayer is an Android emulator that comes for free to be installed on Windows 10. - BlueStacks. BlueStacks is a preferred Android emulator for entrepreneurs who are engaged in developing apps within Windows 10. - Phoenix OS. What is emulation in computing? Emulation, in a software context, is the use of an application program or device to imitate the behavior of another program or device. Common uses of emulation include running an operating system on a hardware platform for which it was not originally engineered. How does an Android emulator work? An Android emulator is a tool that creates virtual Android devices (with software and hardware) on your computer. Note that it is a program (a process that runs on your computer's operating system).
It works by mimicking the guest device's architecture (more on that in a bit). What is the best emulator for PC? BlueStacks is one of the best emulators to play games because it is very powerful. In fact, they claim that it is 6 times faster than your typical smartphone. BlueStacks is available for Microsoft Windows or Mac, so you can play Android games regardless of your operating system. What is an emulator in testing? An emulator is software that mimics the hardware and software of the target device on your computer. Android emulators, Galaxy emulators, and the iPhone emulator (which is actually a misnomer for the iOS Simulator) are some of the widely used emulators for software testing. Can you go to jail for downloading ROMs? There has never been a case (that I can recall) where a person has been prosecuted for downloading a ROM file off the internet. Unless they are selling/distributing them, no, never. Almost anything you download could in theory land you in trouble, not to mention trying to sell any copyrighted material. Is Citra a virus? Citra is open-source; anyone can inspect the code themselves to verify it isn't a virus. You only risk getting a virus by installing third-party builds with unpublished source code, or by downloading ROMs from shady sites (something not condoned by this sub). Of course it isn't a virus.
Time: Tue & Thu, 1:00-2:15 PM
Location: 1221 Computer Science
Description: This course will address the design of provably efficient algorithms for data processing that leverage prior information. We will focus on the specific areas of compressed sensing, stochastic algorithms for matrix factorization, rank minimization, and non-parametric machine learning. We will emphasize the pivotal roles of convexity and randomness in problem formulation, estimation guarantees, and algorithm design. The course will provide a unified exposition of these growing research areas and is ideal for advanced graduate students who would like to apply these theoretical and algorithmic developments to their own research. The course will roughly be broken into the following structure:
Grading: Each student will be required to attend class regularly and scribe lecture notes for at least one class. A final project will also be required. This project will require a class presentation and a written report. The project can survey literature on a related topic not covered in the course or an application of the course techniques to a novel research problem.
Prerequisites: Graduate level courses in probability (like ECE 730) and nonlinear optimization (like CS 726). An advanced level of mathematical maturity is necessary. Familiarity with elementary functional analysis (L2 spaces, Fourier transforms, etc.) will be helpful for the last part of the course. Please consult the instructor if you are unsure about your background.
Lecture 1 (01/19): Introduction. Slides
Lecture 2 (01/21): Introduction to Random Mappings. Related Readings: Proof of Whitney's Embedding Theorem pdf.
Lecture 3 (01/26): Random Projections Preserve Distances. The Johnson-Lindenstrauss Lemma. Related Readings: Dasgupta and Gupta. An Elementary Proof of a Theorem of Johnson and Lindenstrauss. pdf
Lecture 4 (01/28): Epsilon Nets and Embedding. Related Readings: Rudelson and Vershynin. The Smallest Singular Value of a Random Rectangular Matrix. Only the first 5 paragraphs of Section 2, Proposition 2.1, and its proof. pdf. Baraniuk et al. A Simple Proof of the Restricted Isometry Property for Random Matrices. pdf
Lecture 5 (02/02): Sparsity and its applications.
Lecture 6 (02/04): Prony's method. A proof of the invertibility of the Vandermonde System. pdf
Lecture 7 (02/09): l1 minimization, Restricted
Lecture 8 (02/11): l1 minimization, Robust Recovery of Sparse Signals
Lecture 9 (02/16): Matrices with the Restricted Isometry Property.
Lecture 10 (02/18): Algorithms for l1 minimization. Related Readings: Wolfe. The Simplex Method for Quadratic Programming. pdf. Donoho and Tsaig. Fast Solution of l1-norm Minimization Problems When the Solution May be Sparse. pdf. Tropp and Wright. Computational Methods for Sparse Solution of Linear Inverse Problems. pdf.
Lecture 11 (02/23): Matrix Norms and Rank
Lecture 12 (02/25): Rank Minimization in Data Analysis, Hardness Results.
Lecture 13 (03/02): Easily solvable rank minimization problems. Pass-efficient approximations.
Lecture 14 (03/04): The nuclear norm heuristic.
Lecture 15 (03/09): RIP for low-rank matrices. The nuclear norm succeeds.
Lecture 16 (03/11): Gaussians obey the RIP for low-rank matrices.
Lecture 17 (03/16): Algorithms for Rank Minimization
Lecture 18 (03/18): Iterative Shrinkage Thresholding for Rank Minimization
Lecture 19 (03/23): Moving beyond Restricted Isometry Properties. Dual Certificates.
Lecture 20 (04/06): Function Fitting. The Bias-Variance Tradeoff.
Lecture 21 (04/08): Generalization Bounds
Lecture 22 (04/13): Kernels
Lecture 23 (04/15): Approximation Theory. The curse of dimensionality. The blessing of smoothness.
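The Johnson-Lindenstrauss lemma from Lecture 3 is easy to see empirically. The following is an illustrative sketch (not course material): it projects a few high-dimensional points through a Gaussian random matrix with entries scaled by 1/sqrt(k) and checks that pairwise distances are roughly preserved.

```python
import math
import random

random.seed(0)

def random_projection(vectors, k):
    """Project d-dimensional vectors into k dimensions with a Gaussian
    random matrix whose entries are scaled by 1/sqrt(k), so squared
    norms are preserved in expectation (Johnson-Lindenstrauss)."""
    d = len(vectors[0])
    R = [[random.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[i] * v[i] for i in range(d)) for row in R] for v in vectors]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# A handful of random points in R^500, projected down to R^100.
d, k, n = 500, 100, 4
points = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
proj = random_projection(points, k)

# Ratio of projected to original pairwise distance; the lemma says these
# concentrate around 1 once k is on the order of log(n) / eps^2.
ratios = [dist(proj[i], proj[j]) / dist(points[i], points[j])
          for i in range(n) for j in range(i + 1, n)]
```

With k = 100 the typical distortion of each pairwise distance is only a few percent, which is the phenomenon the Dasgupta-Gupta proof makes precise.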
(4/20) Jesse Holzer, Benjamin Recht
(4/22) Jingjiang Peng, Yongia Song, Suhail Shergill
(4/27) Laura Balzano, Badri Bhaskar, Yuan Yuan
(4/29) Hyemin Jeon, Chia-Chun Tsai, Alok Deshpande
(5/4) Matt Malloy, Vishnu Katreddy, Nikhil Rao
(5/6) Bo Li, Zhiting Xu, Shih-Hsuan Hsu
c# disposing syntax I've had to implement Dispose() functionality recently, and have come across 1-line methods, 2-line methods and more comprehensive methods. A 1-line method/function would simply call something like "context.Dispose", but the method I picked up was this:

bool _disposed = false;

public void Dispose(bool disposing)
{
    if (!_disposed && disposing)
    {
        _context.Dispose();
    }
    _disposed = true;
}

public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

Is this syntax merely to stop Dispose() being called more than once? You forgot the finalizer. Also make Dispose(bool) virtual. This can help: Implementing IDisposable and the Dispose Pattern Properly. That's the legacy dispose pattern. IMO there is little reason to use it anymore. @CodesInChaos - If that's the case, what should be used in its place? Depends on the situation. A SafeHandle as the direct owner of the unmanaged resources, and a simple Dispose method without a finalizer as the indirect owner. @CodesInChaos: You not having seen the need to use it lately doesn't mean it's not needed at all any more. @Maarten Knee-jerk addition of a finalizer simply to call Dispose(false) is not usually a good thing. See http://www.bluebytesoftware.com/blog/2011/11/12/ABriefNoteOnObjectMortality.aspx for details on the drawbacks of having a finalizer that doesn't result in freeing unmanaged resources. What you've posted is partially the Dispose Pattern. As someone pointed out, there should be a corresponding Dispose(false) in a finalizer ("destructor"). The finalizer should be used to dispose of unmanaged resources. If you don't have unmanaged resources to deal with (i.e. you don't have anything to do when disposing is false), you don't need a Dispose(false) and thus don't need a finalizer.
This means that Dispose(true) is the only path, so you don't need Dispose(bool) (and thus don't need to implement the Dispose Pattern) and can move its body into Dispose() (and remove the check for disposing) and just implement Dispose. For example:

public void Dispose()
{
    _context.Dispose();
}

Classes that don't implement their own finalizer (destructor) are not put on the finalizer list, so there's no need to call GC.SuppressFinalize. In general, this is enough if you're creating a class. But sometimes you derive from classes that implement this pattern. In that case you should implement support for it in your class (override Dispose(bool), do the disposing check and Dispose of any managed resources). Since the base class implements IDisposable and calls a virtual Dispose(bool) in its Dispose(), you don't need to implement Dispose() yourself, but you have to call the base's Dispose(bool) in your Dispose(bool). For example:

protected override void Dispose(bool disposing)
{
    if (disposing)
        _context.Dispose();
    base.Dispose(disposing);
}

If you're calling into the base and it has implemented the Dispose Pattern, then you also don't need to call GC.SuppressFinalize() because it's already doing it. You can do the whole disposed thing if you want; I find that it hides multi-dispose bugs though. Peter, are you also saying I should ignore GC.SuppressFinalize() too? If you don't have a finalizer, there's nothing to suppress. I.e. C# classes without a finalizer (destructor) are not put in the finalization list, and don't need to be suppressed. Peter, thanks for your help - this is all very new, so clear explanations are very welcome. That is only part of the pattern. The other part missing here is that Dispose(false) would be called by a finalizer. The _disposed state flag can also be used to check and throw ObjectDisposedExceptions in your methods.
The full pattern is here. Jon Skeet provides good information here, and IMO this pattern is overkill for most situations unless you also have unmanaged resources. If not, just dispose of your managed resources and call GC.SuppressFinalize in the Dispose() interface implementation. Use the _disposed flag only if you intend to throw ObjectDisposedExceptions. Jon doesn't really go into a lot of detail there other than "I don't follow this pattern often". Joe Duffy goes into some detail on finalization and the consequences of having a finalizer that doesn't free unmanaged resources here: http://www.bluebytesoftware.com/blog/2011/11/12/ABriefNoteOnObjectMortality.aspx I use the following two forms of dispose based on the class need: Method 1 (for a class with managed and unmanaged resources, or with derived classes):

class ClassHavingManagedAndUnManagedCode : IDisposable
{
    private volatile bool _disposed = false;

    protected virtual void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                // Do managed disposing here.
            }
            // Do unmanaged disposing here.
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
        _disposed = true;
    }

    ~ClassHavingManagedAndUnManagedCode()
    {
        Dispose(false);
    }
}

Method 2 (for a class with only managed resources / a sealed class / a class that has no child classes):

class ClassHavingOnlyManagedCode : IDisposable
{
    private volatile bool _disposed = false;

    public void Dispose()
    {
        if (!_disposed)
        {
            // Dispose managed objects.
            _disposed = true;
        }
    }
}

Any child classes of ClassHavingManagedAndUnManagedCode should just follow the protected dispose method pattern and call base.Dispose at the end of the Dispose method. Also guard all public methods (at least the ones that use the members being disposed) with a method/check that throws ObjectDisposedException if the class instance is already disposed. FxCop will always ask you to implement the ClassHavingManagedAndUnManagedCode form of Dispose even if you do not have any unmanaged resources.
If you don't have unmanaged resources you don't need the finalizer. In fact, by having it you've increased the stress on the GC. An object cannot be collected until the finalizer has run, so the object is "kept alive" until the system has time to call the finalizer. See my answer for detail on not needing the finalizer and its consequence on Dispose(bool). See also http://www.bluebytesoftware.com/blog/2011/11/12/ABriefNoteOnObjectMortality.aspx @PeterRitchie I agree with you that we do not need the finalizer if we do not have unmanaged code. Hence I have given two different variants of the dispose pattern, one for a class with unmanaged code and one without. Also, if we do a GC.SuppressFinalize, I am not sure if we are adding any stress in the GC. You have a bug in your method 2 code, you don't dispose of anything unless Dispose is called twice: if(_disposed) {/* dispose managed objects */} _disposed = true; If you do all the cleanup in Dispose(bool) when disposing is true and you have a finalizer (destructor), then you should have a call to GC.SuppressFinalize (either in Dispose() or in Dispose(bool) when disposing is true) to reduce the stress on the GC (well, the finalizer--but without SuppressFinalize you increase the risk of objects staying alive to gen2, which stresses the GC). @Ganesh - thanks for the input, it was very much appreciated. I've accepted Peter's answer as it helped me understand a little more about Dispose(), but I've voted your answer up as it was particularly useful and will be used again in the future. Is there any scenario in which a class with unmanaged resources requiring Finalize-based cleanup should inherit from a class whose purpose does not center around such cleanup? In every case I can think of, it would be better to wrap the unmanaged resource in its own finalizable object (which could be a private class nested within the type that uses it), so the containing class would not need to worry about cleanup finalization.
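The core of the pattern discussed above — cleanup that is safe to run more than once, plus a hook the language runtime invokes for you — is not C#-specific. As a rough analogy only (a sketch, not the C# pattern itself), here is the same idempotent-dispose guard as a Python context manager, where `with` plays the role of C#'s `using` and the `_closed` flag mirrors `_disposed`:

```python
class Resource:
    """Analogy to a guarded Dispose(): cleanup runs at most once, and
    use-after-close raises, mirroring ObjectDisposedException."""

    def __init__(self):
        self._closed = False
        self._data = []  # stands in for a managed resource like _context

    def use(self):
        if self._closed:
            raise RuntimeError("object already disposed")
        self._data.append(1)
        return len(self._data)

    def close(self):
        # Safe to call more than once, like a Dispose() guarded by _disposed.
        if self._closed:
            return
        self._data = None
        self._closed = True

    # The context-manager protocol plays the role of C#'s `using` statement.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False

with Resource() as r:
    r.use()
r.close()  # second call is a no-op, not an error
```

The point of the guard is exactly the one debated in the thread: making multiple dispose calls harmless, at the cost of possibly hiding multi-dispose bugs.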
Author: Stevan Milinkovic
Last Updated: 2013-04-26
Category: Graphics & LCD
Downloaded: 583 times
Want to view your JPEG pictures directly from microSD? Now it is possible on EasyMx PRO v7 for STM32 ARM. There is no need for Visual TFT. You don't even have to know your JPEG files in advance. Just install the library, insert the microSD with your pictures and run your application.
DOWNLOAD LINK: 1366975583_jpeg_direct_load_mikroc_arm.zip [10.45KB] | RELATED COMPILER: mikroC PRO for ARM
JPEG direct load from MMC
2012-07-30: version 0.0.0.1
All JPEG decoding is done during the image loading process from MMC. There are two demo applications, which are individually enabled or disabled by the line #define DEMO_1. If you comment out this line, DEMO_2 will be compiled. In the DEMO_1 application, there are four pictures of different sizes. The demo shows the autoscale and autorotation features of the library. Of course, loading speed strongly depends on image size, so be patient while loading big images. Limitations: JPEG images up to 2560x1920 pixels (actually a little more; e.g. pictures taken by a Canon PowerShot A530 are shown very well). DEMO_2 compares two instances of the same image: one from code memory (as in the TFT example from Mikroelektronika) and another from MMC.
Function prototype: int TFT_Ext_Jpeg(FILE* fp)
In the demo applications, the value for fp is 0 because the library uses the already opened and activated file (Mmc_FAT16 file system). The pointer to an ANSI FILE structure is for future use and is not implemented yet. Please take a look at the memory usage after compilation of DEMO_1 (that is, without the embedded “tiger” picture). You can see that the required ROM space is about 20KB and the RAM space is less than 5KB. Since the whole library is based solely on the MikroC TFT library, it is expected to be portable to other platforms. The package name is “RAF”. Make sure that the library TFT_Ext_Direct is checked.
Included JPEG demo files:
320x240.jpg 18 KB
640x480.jpg 55 KB
1280x960.jpg 121 KB
big.jpg 2560x1920 pixels 1156 KB
tiger.jpg 51 KB
Copy these files to your microSD card. Note: Library distribution is only for STM32F107VC.
2012-08-19: version 0.0.0.2
Library code is further optimized, so the memory footprint is smaller than in version 0.0.0.1. This doesn't affect performance in any way. Now, for DEMO_1 we have: Static RAM (bytes): 4018 and Used ROM (bytes): 20004 (8%). Note that these figures include memory usage for all necessary functions from the System, MMC_FAT16, TFT, and dependent libraries. In order to update the library, just replace the old jpeg.emcl with the new version.
2013-04-26: version 0.0.0.3 (source code)
Please feel free to use this source code for whatever you want. I hope that the code is readable and that there are enough comments for you to understand it and to be able to port it to other platforms. If you need a “bigger picture” of how it works, please refer to my article: You can get an implementation of the library and demo project for EasyMx Pro V7 for STM32 ARM if you click on [older versions]
usb-modeswitch package for Ubuntu 10.04
usb-modeswitch is a mode switching tool for controlling "flip flop" USB devices. You have searched for packages whose names contain usb-modeswitch in all suites, all sections, and all architectures. Found 2 matching packages: the usb-modeswitch source package in Lucid and the binary package “usb-modeswitch” in ubuntu lucid. 29 Apr: Vodafone Huawei K USB modem works perfectly after installing usb-modeswitch-data (Synaptic Package Manager). Installing the usb-modeswitch package on Ubuntu (Lucid Lynx) is as easy as running the following commands in a terminal:
sudo apt-get update
sudo apt-get install usb-modeswitch
To remove just the usb-modeswitch package itself from Ubuntu (Lucid Lynx), execute in a terminal:
sudo apt-get remove usb-modeswitch
Is usb-modeswitch suitable to be included in the next Ubuntu release? (See his blog here, and the Debian package maintainer Didier Raboud.) These are the 2 packages related to usb_modeswitch. In some cases you might have to install your own config file. 7 Aug: When I tried it on my Ubuntu it showed nothing, but it turned out that I needed to install the "usb-modeswitch" package, due to a known bug. 2 Oct: If you use them with a Windows machine, they act like a USB flash key containing the software, so you will need: sudo apt-get install usb-modeswitch. After several hours of researching 3G USB Linux modem drivers, I found that the modem needs to be switched, which apparently can be done using the usb_modeswitch package. In Ubuntu I tried installing usb_modeswitch from 2 different sources. 14 Sep: If your modem is detected very well, but... I have already installed the usb-modeswitch packages (both data and the main package). A lot has changed since then with the latest version of Ubuntu. 22 May: sudo apt-get install usb-modeswitch. For the command-line challenged, here is a quick screenshot of how to do it using Synaptic Package Manager.
Android: Application with crash dump report
I want to develop a logging framework for my application. The things I want to achieve are:
- Trace the log of my application
- Write the log to a text file while tracing the log
- Filter the log generated by my application
- Stop the log trace if any error/exception happens, after writing the exception to the text file
First of all, I want to know: is this possible? I guess by using a service we can achieve this; if I am wrong please correct me. I referred to this project for achieving my needs: https://github.com/androidnerds/logger In this project they are using AIDL to create a service to record the logs. But the saving of the file occurs only whenever there is an intent for that. Seriously, I'm new to this AIDL process. The point that confuses me is that the sample project doesn't declare any permission in the manifest to WRITE FILES to storage, but it's able to do that. How did they achieve that? I have even gone through these questions: How do I get the logfile from an Android device? Programmatically get log cat data Write android logcat data to a file Save Data of LogCat in android How to save LogCat contents to file? How to write entire Logcat in to sdcard? Filter LogCat to get only the messages from My Application in Android? But nothing worked for me. So please suggest a way to achieve this. what have you done so far? did you play with Logger/Handler classes? well i tried to customize the source code of the project i referred to @pskink i try to modify the source code of https://github.com/androidnerds/logger why? this is a logcat viewer, not a logging utility... @pskink ok so can you guide me what should i do? can i use the approach they used? I mean by creating an AIDL file to get logs read docs of the Logger class and if you want some custom logging use the addHandler() method I suggest taking a look at ACRA. https://github.com/ACRA/acra Even if you don't want to use it, it's still open source so you can take a look at the code.
They do a lot of the things you want to achieve, and when looking at the code you may find answers to your questions. I forked ACRA to do some of the things you mention in your questions: How do I get the logfile from an Android device? You can log everything you want into the ACRA report file. We use a special logger that writes into the ACRA logfile, so we know the sequence of events that triggered the crash. Programmatically get log cat data: It's a text file; we use [sections] and timestamp/key=value pairs to display logs to the users before sending the report. Be careful to anonymize your log reports and do not put personal information about your users in them. Write android logcat data to a file / Save Data of LogCat in android / How to save LogCat contents to file? / How to write entire Logcat in to sdcard? / Filter LogCat to get only the messages from My Application in Android? Everything that you ask here is already managed by ACRA. There is an extension mechanism that allows you to write your own sender and your own reporter.
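The questioner's four goals (write to a file, filter down to your own tag, capture the crash before stopping) form a language-agnostic pattern. As a hedged sketch — Python's `logging` module standing in for an Android logger, with `app.log` and the `MyApp` tag as made-up names — it could look like this:

```python
import logging
import sys

LOG_FILE = "app.log"   # assumed destination file
APP_TAG = "MyApp"      # assumed tag used to filter our own messages

logger = logging.getLogger(APP_TAG)
logger.setLevel(logging.DEBUG)

# Goal 1 and 2: trace the log and write it to a text file.
handler = logging.FileHandler(LOG_FILE, mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
# Goal 3: only records carrying our tag reach the file.
handler.addFilter(logging.Filter(APP_TAG))
logger.addHandler(handler)

# Goal 4: record any uncaught exception to the file, then stop logging.
def crash_hook(exc_type, exc, tb):
    logger.error("uncaught exception: %s", exc, exc_info=(exc_type, exc, tb))
    logging.shutdown()

sys.excepthook = crash_hook

logger.info("application started")
other = logging.getLogger("SomeLibrary")
other.setLevel(logging.DEBUG)
other.addHandler(handler)  # even attached directly, the filter drops foreign tags
other.info("noise from another component")
logging.shutdown()

with open(LOG_FILE) as f:
    contents = f.read()
```

On Android the same shape maps to a custom log handler plus `Thread.setDefaultUncaughtExceptionHandler`, which is essentially what ACRA implements for you.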
M: Facebook's $5B FTC fine is an embarrassing joke - moltensodium https://www.theverge.com/2019/7/12/20692524/facebook-five-billion-ftc-fine-embarrassing-joke R: elpool2 I don't see how people are concluding it's a slap on the wrist that won't change their behavior. FB is basically just guilty of lax security practices. The fine doesn't need to be a large percentage of their total revenue to effect change. It just has to be more than the cost of investing in better security practices. It's not like Cambridge Analytica and the other data leaks made any profits for them. R: austhrow743 >Here's another way to say it: the biggest FTC fine in United States history increased Mark Zuckerberg's net worth. What a garbage conclusion. The article even draws attention to the market being aware of an upcoming fine, and makes no argument for Zuckerberg's net worth being higher than if this fine had never been on the table. R: appleiigs The fact that the stock went up doesn't necessarily mean the $5B is too small. That is a naive understanding of stock valuations. It could go up just because the uncertainty is now removed. R: _bxg1 Key points: \- This is the largest FTC fine in history \- It's 1 month's revenue for Facebook \- After the news broke, Facebook's stock went _up_ R: easytiger It's also still not clear to me what the fine was for. What did Facebook do incorrectly and what law did they break? R: pgnas Facebook gave away user data without permission, the very same thing it continuously promised not to do. Now that they know where the bar is placed, I am sure they will up the ante. Just the price of doing crooked business. R: easytiger No. I'd like to know on a technical level what was done. Anyone can scrape this information on users, their friends and so on, as long as the persons made it public. The government does this at scale. R: skybrian There are probably a lot of people at Facebook who work on things that are worth less than $5 billion.
As a motivating factor in some manager or lawyer's PowerPoint presentation arguing for a new company initiative, it seems like this will be a pretty good argument? R: wfbarks The stock has taken a beating over the past 6 months. It's a classic "buy the rumor, sell the fact", but in reverse.
Since I began studying te reo Māori a bit over a year ago, I've enjoyed watching Māori language TV programming. But it hasn't always been enjoyable. I first used a VPN connection from my phone, but, well, it's my phone. I was able to accomplish the same thing on my computer, and also used Chromecast to stream the programming from my phone to our TV, but that was neither satisfying nor elegant. I also experienced buffering and less-than-optimum video quality on our TV. The Easy Ways There are three relatively easy ways to accomplish this. I'll only provide a brief overview of each and not many details, as there are too many variables that depend on what devices you want to use to receive the programming and watch it on. - If you want to watch on an Android or iOS device, or on a computer, you can simply subscribe to a VPN (virtual private network) service like HotSpot Shield. HSS is the first and only VPN service I tried, and I'm quite happy with them. First you need to establish an account on their system. Second, you download their app onto your device, log in, and select the country from which you want to appear to be logging in. Then launch the streaming application for the services that you want to watch. Easy-peasy. - If you are watching the programming on your Android or iOS device and you want to have it play on your TV, you need to use either AirPlay or Chromecast to accomplish this. I cannot explain all of the ways this could be accomplished as there are too many variables, and there are plenty of instructions for doing this on the web. - If you have multiple devices that you would like to watch this programming from, you can get a wi-fi hub. I bought a GL-AR750S-Ext which worked very well for that purpose. It is a bit more difficult to get set up, and you have to log in via your computer or phone to activate or turn off the VPN service, but it worked very well for me otherwise.
I have another wi-fi hub that services most of our home's devices, and this was the secondary one so that I could access Aotearoa programming from any device connected to it. Be aware that anything connected to the internet through it will appear (to anything you connect to) to be coming from an IP based in Aotearoa, which could affect your ability to watch services based in the US. Once I forgot I was connected to the GL-iNet hub, and could not find a movie on Netflix that my wife had been watching on our other TV. It took me a bit to realize that I was still connected to the Aotearoa VPN, and apparently that movie was not available in Aotearoa. I received Walmart's new and inexpensive Onn Android TV device as a birthday present (since returned) and shortly thereafter received a free Google Chromecast stick for being YouTube TV subscribers. Both run Android TV. But Android TV does not allow you to install many Android phone apps, and among those omissions were Māori TV and other streaming services I wanted to access. I learned how to root these devices and sideload the apps using their .APK files, but Māori TV crashed on both of them whenever I launched it. The big issue here is this: Android OS and Android TV are two different operating systems. Android OS was designed to work with phones, and Android TV with TVs or devices specifically designed to work with TVs. So some apps that work on Android OS (for phones) don't work on Android TV. It makes no sense to me that they can't, but I'm sure that the developers have their reasons (and that they are pretty lame). With all of these devices, I was using HotSpotShield as a VPN (virtual private network). Many online streaming services are restricted geographically, so in my case I could not view some of the Māori language programming I was wanting to watch.
There are over a hundred countries that you can select, and servers will think that you are physically located in those places; fortunately, Aotearoa is one of them. The Hard Way (But I Like It) I next began to look for a device that could run Android OS (not Android TV). There are a number of systems that can do this, but after some research I opted for a Raspberry Pi 4 kit from CanaKit. You cannot install and run Android OS apps using Raspberry Pi's native OS, so you need to install and run KonstaKANG's build of Lineage OS in order to do this. It was modified to run on Raspberry Pi devices. This is not for the faint of heart. I'm not a command-line expert, but felt confident enough after reading the instructions to give it a try. Rather than use the Mini SD card that came with my kit, I used a spare 64G card I had lying around (in case I needed to return the kit). There are some pretty comprehensive instructions for installing KonstaKANG's build of Android for Raspberry Pi 4 on their site, but I did have some issues with it (more due to my misunderstanding than because of faulty instructions). I had better luck following this YouTube video by leepspvideo: I was able to get KonstaKANG's build of Lineage OS and the Google Play Store installed. Māori TV and other Android apps have also installed well from within the Play Store. The biggest issue I have encountered so far is that YouTube TV crashes on launch. There is a HotSpotShield app for Android OS, so I am able to go in and enable my VPN whenever I want to access whatever streaming service on the Raspberry Pi 4. Previously I was using a VPN-ready wifi hub. Now I can just activate the VPN connection through the Android device, fire up my app and stream away. Geekly lesson learned the hard way: when you go into HotSpotShield in the Android app, you cannot select the country by clicking on the name.
You have to use the arrow buttons: move into the country column with the right arrow button, scroll down the list with the down arrow button, and then select the country you want. This whole process is not as elegant as I would like it to be – yet. I have to go into the command line to shut down, but I'm sure there is an app for that. I also have to unplug the Raspberry Pi power cable and plug it back in, but am looking for an elegant solution for that as well. Update 7/15/2021: I didn't enjoy getting up and using my mouse and keyboard to change channels or videos, so I picked up this little gem for $20 – a Rii wireless USB keyboard with trackpad. It took a while to figure out what did what, but it's been working great! Kia kaha te reo Māori! PS: Pae Rāhipere is Māori for Raspberry Pi.
Enabled/Visible Conditions are two model classes by which relationships can be linked to element types, actions, properties, tools, or dialog input fields. These conditions determine when the execution of an action or tool is permitted, when an input dialog field is editable, grayed out, or hidden, or when a property is visible in the Eclipse Property view. When modeling an enabled condition for an Action_Has_Tool relationship, the administrator defines the permitted values for certain properties, so that the tool is executed within the action. By doing this, simple and complex rules can be defined, where complex rules link the simple rules via operators. Simple enabled condition example: A complex enabled condition contains any number of simple and complex enabled conditions, and links them via operators. The visible condition defines whether an action, property or dialog input field is shown in the corresponding view, context menu or dialog. You can specify simple and complex rules for the condition in the same way as for enabled conditions. The following table shows the relationships for which enabled or visible conditions can be defined:
|Relationship||Enabled Condition||Visible Condition|
|Application Options has Application Action||Yes||Yes|
|Global Action has Workbench Action||Yes||Yes|
|Global Action has Editor Action||Yes||Yes|
|Element has GetChildren Action||Yes||No|
|Element has Action||Yes||Yes|
|Element has Property||No||Yes|
|Element List Structure has Action||Yes||Yes|
|File Descriptor has Action||Yes||Yes|
|Action has Tool||Yes||No|
|Tool has Input Parameter||Yes||Yes (modeled dialog)|
|Text Decoration has Decoration Value||No||Yes|
If both an Enabled and a Visible condition are defined for a relationship, the two conditions are evaluated in the following way:
|Selection_Count [O]||String||Number of resources that must be selected for the referenced action to be executable.
* = Number of resources is irrelevant 0 = No resources may be selected n = Exactly n (n = 1,2,3,...) resources must be selected [*|n]-[*|n] = A range of numbers, for example, “2-*” = at least two resources must be selected. Default value: * |Complex_Enabled_Condition (0..1)[D]||Complex Condition||Complex condition. Consists of several combined conditions.| |Simple_Enabled_Condition (0..1)[D]||Simple Condition||Simple condition. Checks a property value for a specific value.| The enabled condition for an element action is specified here. It begins with a complex enabled condition (green dashed framed), which links the two simple enabled conditions (red framed) with the AND operator. AWM interprets this model as follows: Action ACT_X from ElementType ELE1 can only be executed if the property value of PROP_Protected is false and corresponds to the property value of Property PROP_Group “TEST”. In addition, exactly one Element must be marked. A simple enabled condition checks a property value for a defined value or status, and activates or deactivates the action depending on the result of the check. If the check is done during the execution of an action, for example, by a Tool Enabled Condition, a property value is first searched for among output parameters of the previous tools of the action and then in the context of the selected element(s). |TargetID||Property||Reference to the property whose value should be checked.| |Operator||Selection|| The check operator. 
Valid values include: |Value [D]||String||A fixed value that is compared with that of the Property.| The meanings of the different operators are shown in the following table: |Equals…||Checks whether the property value of the referenced property has the same value as specified in "Value".|| The action can only be executed if the selected element is a COBOL program: TargetID = PROP_Type Operator = Equals… Value = COBOL |Equals not…||Checks whether the property value of the referenced property has a value different from the specified "Value".|| The action can only be executed if the selected element is not in production: TargetID = PROP_Stage Operator = Equals not… Value = PROD |Regular Expression|| Checks whether the property value of the referenced property matches with the regular expression specified in "Value". This check is not case sensitive. | The action can only be executed on COBOL files: TargetID = PROP_FileSuffix Operator = Regular Expression… Value = (cbl)|(cob) |NULL||Checks whether the referenced property has no property value.|| The action can only be executed if the selected element has no access key: TargetID = PROP_AccessKey Operator = NULL |NOT_NULL||Checks whether the referenced property has any property value.|| The action can only be executed if the selected element has a change date: TargetID = PROP_ChangeDate Operator = NOT_NULL |TRUE||Checks whether the referenced property has the Boolean value true. The valid value for true which is checked during run time is defined in the property definition.|| The action can only be executed if the selected element can be edited: TargetID = PROP_Editable Operator = TRUE |FALSE||Checks whether the referenced property has the Boolean value false. The valid value for false which is checked during run time is defined in the property definition.|| The action can only be executed if the selected element is not write-protected: TargetID = PROP_Protected Operator = FALSE
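The operator semantics described above can be sketched as a small evaluator. This is illustrative only: AWM conditions are defined in the model editor, not in code, so the dictionary-based property lookup and the simplified operator identifiers (e.g. "EqualsNot" for "Equals not…") are my own; the property names come from the examples in the text.

```python
import re

def eval_simple(props, target, operator, value=None):
    """Check one property value, mirroring the AWM operator table.

    `props` stands in for the element's property values.
    """
    actual = props.get(target)
    if operator == "Equals":
        return actual == value
    if operator == "EqualsNot":
        return actual != value
    if operator == "RegularExpression":
        # AWM documents this check as not case-sensitive
        return actual is not None and re.fullmatch(value, actual, re.IGNORECASE) is not None
    if operator == "NULL":
        return actual is None
    if operator == "NOT_NULL":
        return actual is not None
    if operator == "TRUE":
        return actual is True
    if operator == "FALSE":
        return actual is False
    raise ValueError("unknown operator: " + operator)

def eval_complex(props, op, conditions):
    """Link any number of simple/complex conditions with an AND/OR operator."""
    results = [c(props) for c in conditions]
    return all(results) if op == "AND" else any(results)

# The ACT_X example: PROP_Protected is false AND PROP_Group equals "TEST".
element = {"PROP_Protected": False, "PROP_Group": "TEST"}
enabled = eval_complex(element, "AND", [
    lambda p: eval_simple(p, "PROP_Protected", "FALSE"),
    lambda p: eval_simple(p, "PROP_Group", "Equals", "TEST"),
])
```

Because `eval_complex` accepts any callables, a complex condition can nest other complex conditions, matching the "any number of simple and complex enabled conditions" rule.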
In the realm of gaming, technological advancements continually redefine user experiences and the overall landscape. Deep Brain, an AI powerhouse, has been instrumental in revolutionizing gaming technology, introducing innovative solutions that have reshaped the gaming industry’s dynamics and player interactions. The Intersection of AI and Gaming Technology AI-Powered Gameplay Enhancements Deep Brain’s AI innovations have transformed gameplay. The platform’s algorithms facilitate adaptive gameplay mechanics, providing personalized experiences, dynamic challenges, and real-time adjustments tailored to individual player preferences. Immersive Virtual Environments Deep Brain creates immersive virtual worlds. Through AI-generated content and simulations, the platform enables the development of lifelike environments, fostering heightened immersion and a more engaging gaming experience. AI-Driven Game Development Deep Brain pioneers AI-driven game development. Its technology streamlines game creation processes, automates design iterations, and offers predictive insights, empowering developers to create captivating and unique gaming experiences. Deep Brain’s Impact on Gaming Innovation AI-Enhanced Graphics and Rendering Deep Brain revolutionizes graphics and rendering. Through AI-powered algorithms, the platform elevates graphical fidelity, optimizes rendering processes, and enhances visual quality, pushing the boundaries of realism in gaming. Dynamic AI-enabled NPCs and Non-Linear Storytelling Deep Brain transforms NPC interactions and storytelling. Its AI algorithms enable non-linear narratives, dynamic character behavior, and responsive NPC interactions, contributing to more immersive and engaging storytelling experiences. AI-Infused Game Mechanics and Balancing Deep Brain refines game mechanics and balance. 
The platform’s AI analyzes player behavior, adjusts difficulty levels, and fine-tunes game mechanics in real-time, ensuring optimal gaming experiences for players of varying skill levels. Deep Brain’s Role in Gaming Advancement Personalized Gaming Experiences Deep Brain tailors gaming experiences. By leveraging AI-driven insights, games adapt to individual player behaviors, preferences, and skill levels, providing personalized challenges and content. AI-Powered Game Testing and Quality Assurance Deep Brain enhances game testing and QA processes. Its AI algorithms expedite testing cycles, identify bugs, and refine game performance, ensuring higher-quality gaming experiences before launch. AI-Driven In-Game Assistance Deep Brain offers AI-based in-game assistance. The platform’s technology provides real-time hints, tips, and assistance to players, enhancing gameplay and supporting progression. Deep Brain’s Fusion of AI and Gaming Potential AI-Driven Game Customization and Modding Deep Brain enables game customization and modding. Its AI-powered tools empower gamers and developers to create custom content, mods, and expansions, fostering a vibrant gaming community. AI-Powered Real-Time Analytics for Game Optimization Deep Brain optimizes games through real-time analytics. Its AI constantly monitors player behavior, collects data, and offers insights to developers, aiding in ongoing game improvements and updates. AI-Enabled Dynamic Game Worlds Deep Brain’s AI shapes dynamic game environments. Through predictive algorithms, games evolve and adapt in real time, creating ever-changing worlds and experiences for players. Embracing the Future of AI-Infused Gaming Deep Brain’s commitment to pushing the boundaries of AI-powered gaming technology remains unwavering. As AI continues to evolve, the gaming industry stands on the brink of transformative changes. 
With Deep Brain’s innovative solutions, the trajectory of gaming is poised to include hyper-personalized experiences, AI-assisted game design, and immersive virtual worlds that seamlessly adapt to individual preferences in real time. Embracing this synergy between AI and gaming technology opens a realm of endless possibilities, where player immersion, creative expression, and technological innovation converge to shape the future of interactive entertainment in unparalleled ways. Conclusion: Deep Brain’s Evolutionary Impact on Gaming Technology In conclusion, Deep Brain’s AI-driven innovations have transcended traditional gaming paradigms, pushing the boundaries of what gaming technology can achieve. By integrating AI into gaming ecosystems, Deep Brain has not only enhanced player experiences but has also revolutionized game development processes. The platform’s ability to personalize gaming experiences, optimize game mechanics, and foster continuous innovation has propelled the gaming industry toward a future where AI plays an integral role in creating immersive, engaging, and dynamic gaming worlds. As Deep Brain continues to refine its AI capabilities, it remains at the forefront of gaming technology, offering a gateway to a new era of gaming experiences defined by innovation, immersion, and limitless possibilities.
Exceptions Renderer Listener causing memory leak

Laravel Version: 11.20.0
PHP Version: 8.3.10
Database Driver & Version: No response

Description

The linked file from the linked PR is causing a memory leak: https://github.com/laravel/framework/pull/51261/files#diff-dd48a4b628c66b2c2388be3d8f6c63911e5cef6b1bfb483fe8c36739087ce40f (#51261)

Why? This listener class implicitly listens to query execution events, and then, despite me not opting into logging queries, this class stores executed queries in it, though limited to 100. This is bad when doing many bulk updates in an (ETL) loop, because it stores large bulk insert SQL queries in memory, causing high memory usage. OK, to be fair, this only happens if Debug mode is on. But I did spend a bunch of time debugging why memory usage was high while developing an ETL tool locally, because I didn't know there was a recent code change that kept query logs that I didn't opt into.

Steps To Reproduce

```php
foreach ($lazyCollectionWith100000Values->chunk(1000) as $rows) {
    \DB::table('some_table')->insert($rows->toArray());
}
// 100 x 1000 rows worth of values are stuck in memory.
```

As it's mentioned here, you can disable the logging and enable it after the bulk insert. This will result in reduced memory usage and not throw the exception.

Yes, assuming you didn't create any event listeners that you did actually want to use

If you still need to have event listeners on these exact queries, you can call setEventDispatcher, referencing which event listeners to keep. Pretty much excluding the one you're getting the exception from and including your desired one. But I may be seeing this wrong :)

Hi @amcsi. Since you mention this only happening with debug mode I don't consider this a huge issue. @hugoboss17 also gives a workaround (thanks!).

The workaround isn't perfect, because maybe one would want to listen to events in a custom listener.
But at least I wanted to make sure there was an Issue for this so that if in the future anyone would get confused about high memory usage locally or on a dev server, they would save time by finding this Issue by googling.

So for projects where you don't need to listen to any DB events at all, this is the workaround in the boot() method of any service provider:

```php
\DB::connection()->setEventDispatcher(new \Illuminate\Events\Dispatcher());
```

This is better than `\DB::connection()->unsetEventDispatcher();`, because otherwise some Feature tests would fail.

In the event that you do want to listen to other DB events, you can create a new override class for the high memory usage listener class:

```php
<?php

declare(strict_types=1);

namespace App\Exceptions;

use Illuminate\Database\Events\QueryExecuted;
use Illuminate\Foundation\Exceptions\Renderer\Listener;

/**
 * Override of Laravel's exceptions listener.
 */
class ExceptionsListenerOverride extends Listener
{
    public function onQueryExecuted(QueryExecuted $event)
    {
        // (...Or you could have a condition here to not return, but rather call
        // the parent, if you do want the log-to-memory behavior for the
        // exceptions HTML page.)
        return;
    }
}
```

...And then use that overridden class this way in any service provider's register() method:

```php
$this->app->bind(Listener::class, ExceptionsListenerOverride::class);
```
Type Control Panel in the search box and click Control Panel. This way, a RW disc can be erased and reused many times before it becomes unreliable. However, unlike the SSD case, I did not encounter any information suggesting that one rewritable CD/DVD writer may be more or less efficient than another at erasing a rewritable disc's content. If the comparison is identical, LCD will show the message as follows. A mini window pops out, on which you can edit the partition label and choose a file system. This format is used by DVD-Video discs. It is a sort of virtual folder that allows keeping links to frequently used files and accessing them as if they were real files and not just links. Select the file system and set the cluster size. Overburn lets you decide whether to copy beyond the limit of the media or not. HP recommends that you do a full erase as it deletes all the files and creates a new empty registry. VAT support was extended. While from a pure theoretical point of view one may still imagine some tiny weakness exploitable by a large and well-funded organization, as a reminder we are here in the scope of data clearance and not sanitization. Also if this setting is enabled, ImgBurn no longer tells the previewer to insert a 1 second pause between cells. The data on a re-writable disc can be erased using either the standard Windows Explorer or most data burning software applications. Do you want to verify the hard disk or just a partition? Overburn may cause a DVD writer to be damaged and data incomplete. Asus makes software called E-Hammer that may work with non-Asus drives: Has read-only compatibility with UDF 2. A normal cell usually contains several VOBUs, unless it's a very short cell such as a single frame. If the file is. Other CD file system formats are described here: There are many multi-angle examples in Disney DVDs.
Indeed, when asking a piece of software to erase a disc, all it will do is send an ATAPI BLANK message to the disc writer's firmware, and it will be up to the firmware to handle the actual data erasing and send back the operation result to the software. New Password When it is correct, you will see the following figure. Sanitization is the process of removing the data from media before reusing the media in an environment that does not provide an acceptable level of protection for the data that was in the media before sanitizing. When the erase operation is complete, this disc is then ready to re-use. I also strongly suspect that a lot of end-user software may only propose this mode. A format that allows data to be written to the disc only once. Also supports command-line commands "mkdir", "del" and "rmdir". The system provides two options here and by default it selects the latest image that was last taken, but if you wish to select the system image of your choice then you can use the second option, which allows you to manually select the system image. This build adds an extra Sparing Table in order to manage the defects that will eventually occur on parts of the disc that have been rewritten too many times. A "maximum write" revision additionally records the highest UDF support level of all the implementations that have written to this image. There are three ways in which you can recover the Windows 7 hard disk partition and other partitions they are: The system cannot find a hard disk. Allows users to connect to websites via Total Commander and see links, pictures or any downloadable links as files. If there is data on the CD or DVD, you should first erase the data already on the disc and then format it for reuse. The partition is damaged. Select the Continue button to begin the erase operation.
However, it will eventually become unreliable with no easy way of detecting it. Universal Disk Format (UDF) is a profile of the specification known as ISO/IEC and ECMA and is an open vendor-neutral file system for computer data storage for a broad range of media. In practice, it has been most widely used for DVDs and newer optical disc formats, supplanting ISO 9660. Due to its design, it is very well suited to incremental updates on both recordable and (re)writable media. DVD-R or DVD-RAM discs work well for storing large files such as movies and backup files. Unlike rewritable DVD discs, DVD-R discs typically are suitable for burning information a single time. However, you may be able to delete items from a DVD-R disc, if necessary. Technically yes, assuming you meant a burned CD/DVD and not a pressed one. For the longer answer see What prevents a CD-R from being rewritten?. 4. Click the restart button to restart the computer and to start the recovery or restoration process. Now the computer will reboot into recovery mode and you can start the recovery process; for further detailed information refer to the instructions below. Insert the rewritable disc, such as a CD-RW, DVD-RW, DVD+RW, or DVD-RAM disc, into your computer's CD, DVD, or Blu-ray Disc burner. 2. Open Computer by clicking the Start button, and then clicking Computer. Erasing a CD is a simple process, but it can only be done with a CD-RW disc. The data on a CD-R is permanently written on the disc. You are being eco-friendly when you use a CD-RW, because you are reusing a disc rather than burning a disc that will later be thrown out when you are done with it.
This web page is a guide to getting and installing the TENCompetence WP6 Learning Design Services for CopperCore Service Integration.

The contacts are Dai Griffiths and Paul Sharples. Version 2.0.0 Final. Last updated 10 February 2010.

This download is a modified version of the current CopperCore Runtime Environment (CCRT). It has been modified to include the updated SLeD player and the Widget Server. Since version 2.0.0 this download also includes the Astro player.

Beta cross-platform download

In order to use the bundled "Astro" application, you will need to use the linktool or sledadmin to create users and runs as you usually would. (See the "quick start guide for SLeD and the linktool" link below for information on how to create runs and users.) Once you have done this you should point your web browser to http://localhost:8080/astro. You should then be prompted to log in to Astro.

What is this tool for? Widget Engine Design Philosophy

How do I use a widget in my Learning Design? To use one of the widgets made available from the widget server, you must add a parameter to an existing service element found within an environment. See here for further details.

You will need to download the following zip file from http://coppercore.sourceforge.net/downloads.shtml: CopperCore Runtime Environment (coppercore_ccrt_3.1.zip). Unzip this downloaded zip file into the parent folder where the following other modules will also be situated.

The source code is stored in a SourceForge CVS repository. The details are:

Repository path: /cvsroot/tencompetence
Connection type: pserver

The modules needed are: wp6/org.tencompetence.jboss.mbean.db (optional, but recommended; this module allows CCRT to be patched so that it can run the widget database using a combination of HSQL and an mbean auto-loader, without having to set up an external database).

IMPORTANT: check these modules out so that each project is in the same folder as CCRT.
This is because the Ant build scripts expect CCRT to be in a certain location. If you wish to check out the modules differently, it will be up to you to modify them accordingly.

For development work you will need Eclipse 3.3. You will also need to follow the instructions found in "readme.txt" at the root of the "org.tencompetence.widgetservice" CVS project in order to build everything successfully.

SCORM integration work: this is now part of the above installation. Resources and more detailed information, however, can still be accessed here. Please note that the SCORM functionality is currently supported only in the SLeD player and not in Astro.

If you are having problems please email email@example.com.
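A checkout matching the instructions above might look like the following shell session. The pserver host shown is an assumption based on the usual SourceForge naming pattern (the page only specifies the repository path and connection type), so substitute the project's actual host, and run the checkouts from the same parent folder that contains the unzipped CCRT:

```shell
# Assumed host; the guide only gives the repository path and connection type.
CVSROOT=":pserver:anonymous@tencompetence.cvs.sourceforge.net:/cvsroot/tencompetence"

cd /path/to/parent-folder    # the folder that already contains the unzipped CCRT

cvs -d "$CVSROOT" login      # press Enter at the empty password prompt
cvs -d "$CVSROOT" checkout wp6/org.tencompetence.jboss.mbean.db
cvs -d "$CVSROOT" checkout wp6/org.tencompetence.widgetservice
```

Checking out into this parent folder keeps each project alongside CCRT, which is what the Ant build scripts expect.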
namespace TinyOData.Query
{
    using System;
    using System.Collections.Specialized;
    using System.Net.Http;

    /// <summary>
    /// Class that represents the parsed query string with properties used to access
    /// parts of the query string
    /// </summary>
    public class QueryString
    {
        #region Public constants

        public const string ODataTop = "$top";
        public const string ODataSkip = "$skip";
        public const string ODataOrderBy = "$orderby";
        public const string ODataFilter = "$filter";
        public const string ODataSelect = "$select";

        public const char ParameterDelimiter = '&';
        public const char KeyValueDelimiter = '=';
        public const char PropertyDelimiter = ',';

        #endregion Public constants

        #region Private fields

        private readonly StringDictionary _stringDictionary;

        #endregion Private fields

        #region Constructor

        /// <summary>
        /// Creates a new <see cref="QueryString"/> instance from a string
        /// </summary>
        /// <param name="queryString">The string to parse; must be an absolute URI,
        /// because it is handed to the <see cref="Uri"/> constructor</param>
        public QueryString(string queryString)
            : this(new Uri(queryString))
        {
        }

        /// <summary>
        /// Creates a new <see cref="QueryString"/> instance from a <see cref="Uri"/>
        /// </summary>
        /// <param name="uri">The <see cref="Uri"/> to parse</param>
        public QueryString(Uri uri)
        {
            _stringDictionary = new StringDictionary();
            NameValueCollection rawParsedPairs = uri.ParseQueryString();
            foreach (string key in rawParsedPairs)
            {
                // A parameter without '=' is parsed as a null key; skip it to
                // avoid a NullReferenceException on Trim()
                if (key == null)
                {
                    continue;
                }

                string value = rawParsedPairs.Get(key);
                _stringDictionary.Add(key.Trim(), value.Trim());
            }
        }

        #endregion Constructor

        #region Indexer

        public string this[string key]
        {
            get { return this._stringDictionary[key]; }
        }

        #endregion Indexer

        #region OData query key-value pairs

        public string TopQuery
        {
            get { return Top != null ? string.Format("{0}{1}{2}", ODataTop, KeyValueDelimiter, Top) : null; }
        }

        public string SkipQuery
        {
            get { return Skip != null ? string.Format("{0}{1}{2}", ODataSkip, KeyValueDelimiter, Skip) : null; }
        }

        public string OrderByQuery
        {
            get { return OrderBy != null ? string.Format("{0}{1}{2}", ODataOrderBy, KeyValueDelimiter, OrderBy) : null; }
        }

        public string FilterQuery
        {
            get { return Filter != null ? string.Format("{0}{1}{2}", ODataFilter, KeyValueDelimiter, Filter) : null; }
        }

        public string SelectQuery
        {
            get { return Select != null ? string.Format("{0}{1}{2}", ODataSelect, KeyValueDelimiter, Select) : null; }
        }

        #endregion OData query key-value pairs

        #region OData query values

        public string Top { get { return this._stringDictionary[ODataTop]; } }
        public string Skip { get { return this._stringDictionary[ODataSkip]; } }
        public string OrderBy { get { return this._stringDictionary[ODataOrderBy]; } }
        public string Filter { get { return this._stringDictionary[ODataFilter]; } }
        public string Select { get { return this._stringDictionary[ODataSelect]; } }

        #endregion OData query values
    }
}
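A hypothetical usage sketch (the URL and values are invented). Two caveats worth knowing: the `ParseQueryString` extension on `Uri` used by the constructor comes from the System.Net.Http.Formatting assembly (Microsoft.AspNet.WebApi.Client NuGet package), and the string constructor requires an absolute URI because it simply wraps `new Uri(...)`:

```csharp
using System;
using TinyOData.Query;

static class Demo
{
    static void Main()
    {
        // Must be absolute: the string constructor delegates to new Uri(...)
        var query = new QueryString("http://example.com/products?$top=10&$skip=5&$orderby=Name");

        Console.WriteLine(query.Top);         // raw value: "10"
        Console.WriteLine(query.SkipQuery);   // re-assembled pair: "$skip=5"
        Console.WriteLine(query["$orderby"]); // indexer access: "Name"
    }
}
```

Note the difference between the value properties (`Top`), which return just the parameter value, and the `*Query` properties (`SkipQuery`), which re-assemble the full `key=value` pair.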
It is not even a debate. Not having backups will destroy your soul. I have seen it happen time and time again. Companies going back in time decades, some closing shop, I.T. departments F.U.B.A.R. and people's lives destroyed. The non-corporate world equivalent is people losing their digital archives of irreplaceable digital memories forever; photographs, videos, etc. Gone. Forever. On good days your focus could not be further away from thinking about losing data. Things are happening. The world is evolving, your systems are working, and the world is your oyster. On bad days disks fail, data becomes corrupted, you are under attack by some evil hacker organization. Having a well-thought-out backup plan makes your bad days a nuisance, one undesirable change of plan. But, bad as it is, it will not be a catastrophic failure with dire consequences. The RAID and other data replication schemes misconception. Having a system that replicates data against hardware damage of some of the drives is excellent. You can switch to alternative media in failure scenarios and keep your system operational while replacing faulty hardware. It deals with the integrity and failures of your physical layer (hardware). While under attack or dealing with a nasty bug that slowly (but surely) corrupts your data, it is useless. Automation will ensure that the original and the replicated data are corrupted equally. So you will end up with two perfect clones of the problem and no way to get back to a previous version. Replication systems are NOT a backup solution. Instead, they address and deal with a specific use case: hardware failure. You need one immutable, point-in-time backup of critical data that you can revert to. Snapshots and incremental backups Cloud platforms and Virtual Machines have point-in-time backups called Snapshots. They store the differences between data in two points in time, and you can easily navigate to a previous version of your system.
You can usually automate snapshot creation in ways that fit the ever-changing needs of any particular system. Usually, scheduling is tightly related to the speed at which data changes by design, sometimes tied to logical business cycles. Opening and closing the daily activity in a store, catalog updates, price changes, etc. The need for offline remote location copies and immutable data There is no safety whatsoever in having incremental backups if they can be destroyed along with the system they are designed to restore. I have seen ransomware attacks that ended up with backup files held hostage along with everything else. An entire building was destroyed by a fire. These things do happen. You must keep replicas of the incremental backups, updated regularly on a remote location, preferably offline. This way, you know for sure that it is untouched and ready to be used if the need arises. Backup restoration is not optional. I cannot recount how many times I have witnessed puzzled looks on people’s faces when they find out their backups are useless. They simply cannot restore them. When they find out about their predicament, people’s expressions soon turn pale and give way to shock and horror. You must test the restore procedures (regularly) to ensure they are (and remain) viable and know how long it will take to recover from disaster. That’s where your confidence should come from. Knowing the outcome. Not from the illusion of supposedly having done everything right, only to find out that, in fact, you didn’t. Do the work. You will get the benefits someday, at the same juncture where most people are hit by drama. Having a backup plan Effective backup plans vary drastically depending on the systems and data they protect. There is no silver bullet solution to fit all systems and problems. They must answer the following questions: - What data needs to be backed up? Some systems have primarily data in transit that can be rebuilt at run time from the source.
There is no point in backing it up. You should have data backed up at its inception already, and the system will populate it on the first run after restoration. - What system or technology will perform the backup operation? - Where will backup data be stored? - What is the procedure to have an offline copy? - How will it be transported to a secure remote location? - Are there risks related to data in transit? - Will backups need to be encrypted? - How much will backups cost over time? - How often will backup operations be performed? - How often are backups validated and restore-tested? - How long will backups be retained? - Who has access to backup data? - Who will have access to the encryption keys that protect backups if encrypted? - Who is responsible for testing backup restore procedures? - Who executes backup restoration in real-world scenarios? - Who is in charge of documenting the processes? Use and protection: - What are the emergency procedures? - What are the specific events the backup strategy is designed for? Over the years, I have seen too much pain and sorrow over essentially poorly designed or nonexistent backup plans. Don’t be that person that fails. Be better than that.
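As a concrete illustration of the restore-testing point above, here is a minimal, self-contained sketch (not a production tool; all paths and names are invented) that backs a directory up into a tar archive and then proves the archive is restorable by extracting it into a scratch directory and comparing checksums against the source:

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def checksums(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def backup(source: Path, archive: Path) -> None:
    """Write a point-in-time copy of `source` into a compressed tar archive."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=".")

def restore_test(source: Path, archive: Path) -> bool:
    """Restore into a scratch directory and verify it matches the source."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)
        return checksums(source) == checksums(Path(scratch))

# Demo on a throwaway directory standing in for real data.
workdir = tempfile.mkdtemp()
src = Path(workdir) / "data"
src.mkdir()
(src / "notes.txt").write_text("irreplaceable digital memories")
arc = Path(workdir) / "backup.tar.gz"
backup(src, arc)
ok = restore_test(src, arc)
```

The key design point is that `restore_test` exercises the same code path a real recovery would: it does not trust the archive until the restored bytes have been compared with the originals. The real-world equivalent also has to answer the scheduling, offsite-copy, and encryption questions from the list above.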
As we mentioned at the last release, as part of integrating the Hallmarks of Cancer feature, the Cancer Gene Census (CGC) has had a thorough re-evaluation. What's new in v82? The August updates include the completion and launch of the restyled website. The full curation of four new genes: BTK, DROSHA, EPAS1 and KEAP1, a substantial update to SMAD4 and a new fusion pair NUP214-ABL1. Full details here. The restyled website is complete and we have now switched it over as the main website with the old COSMIC site becoming legacy. The changes have been designed to give the website a modern look and feel, and to allow for the incorporation of new tools that are under development for COSMIC, such as the Hallmarks of Cancer feature. Full details here. The Cancer Gene Census has had a thorough re-evaluation as part of integrating the Hallmarks of Cancer feature. This feature, initially released in May, has now had a substantial update to include 226 genes. Full details here. Would you be interested in doing some user experience testing for COSMIC? As part of an ongoing update program, we are encouraging people who regularly use COSMIC resources to help us ensure that our systems are as user-friendly and helpful as possible. Full details here. It is the nature of maintaining a large database that there is a degree of turnover as mutations are reclassified and new information comes to light. As a result of our continuous efforts to ensure we provide the latest and most accurate information, we've recently received a number of enquiries regarding ‘missing’ mutations. Full details here. Have you explored the new beta site yet? We have just pushed updates to the cancer browser and sample pages so they match the new style. Full details here. The current COSMIC website is starting to show its age and at the August release we will be switching to the beta site that we have been developing over the last few months. 
This will mean that with COSMIC v82 the current beta site becomes the live site, while the current live site will be relegated to legacy status. Full details here. We are looking to recruit a talented cancer genetic scientist to join the COSMIC team, based at the Sanger Institute, Cambridge, UK. You will take a leading scientific role working on a new project defining the genetic drivers of cancer. Find out more here. COSMIC will be presenting at the Exploring Human Genetic Variation workshop series aimed at introducing users to data resources and tools developed by EMBL-EBI and the Sanger Institute. This two day series is based at EMBL-EBI at the Wellcome Genome Campus on the 18th and 19th of July 2017. Full details here.
Update mxnet in maven timely?

In the Maven repository, the mxnet version is much delayed. Furthermore, it depends on OpenCV, which in our case our server doesn't need.

@javelinjs Let's release static build on maven?

Sure. I'll work on this. BTW, are we going to change package name from ml.dmlc to org.apache? cc @mli

@javelinjs let me know if need any help on this.

@szha Thanks for the invitation to the deployment project.

@javelinjs @szha @piiswrong Thanks for paying attention to this issue. Another question about the Scala version: could the NDArray.toArray method block on the CPU version of mxnet? This method blocks me for about 10 seconds each time I run it on the CentOS server, which is negligible when I run it on my CPU-only Mac. I read the code:

```cpp
if (this->ctx().dev_mask() == cpu::kDevMask) {
  this->WaitToRead();  // <-- suspected blocking point
  RunContext rctx{this->ctx(), nullptr};
  ndarray::Copy<cpu, cpu>(this->data(), &dst,
                          Context::CPU(), Context::CPU(), rctx);
}
```

I guess it is blocked here. I don't know why we have to wait to read; is there a way to do this copy without waiting?

@maxenceliu the same problem as you mentioned in https://github.com/apache/incubator-mxnet/issues/3724 ?

@javelinjs Yes. I haven't located the problem. Do you have some suggestions?

@javelinjs @piiswrong @szha I think I have partly solved this problem. But I still don't know the reason. After I replaced FeedForward with Module, there is no halt anymore. Is there anything different between Module and FeedForward? Since predict methods are implemented in both of these classes.

@maxenceliu I also use FeedForward in v0.7 and failed to upgrade because of the same problem. I will try Module later.

Given that this thread has digressed a little, @javelinjs do you need any help on the original issue? "Update mxnet in maven timely? #7417"

@javelinjs what are the next steps here and do you need any help?
Now that mxnet is an Apache incubator project, I do think we need to change the package name prefix from ml.dmlc to org.apache.mxnet. That is what all other Apache projects have done. Also, going forward I think we should automate the "publish artifacts to maven central" step so that it executes whenever a new release is cut. Let's talk about how to get there.

I'm working on the deploy repo. Following the py package, I shall make it work this weekend. Once it is stable, we will go on to change the package name. One thing I'd like to raise is the PGP key; right now I'm using my private key for Maven deploying. Do we have a key for org.apache?

@javelinjs Thanks. This page has some info regarding releasing Maven artifacts for an Apache project: http://www.apache.org/dev/publishing-maven-artifacts.html In particular, the project needs to be configured in Nexus. For this we need to open a JIRA: http://www.apache.org/dev/publishing-maven-artifacts.html#signing-up

@apache/mxnet-committers: This issue has been inactive for the past 90 days. It has no label and needs triage. For general "how-to" questions, our user forum (and Chinese version) is a good place to get help.

@yzhliu Any update on this?

It's in Maven now; we'll be releasing regularly with every release. https://mvnrepository.com/search?q=org.apache.mxnet
GITHUB_ARCHIVE
Value of the last location followed in PHP Curl? I did a search on the site before asking this question, but I believe this hasn't been asked before; apologies if it has been and I missed it. What I'm trying to achieve is a little 'weird', I guess? Say, for example, I'm POSTing data to (or GETting data from) a page that returns the response along with a 'Location' header that I want it to follow. Under most circumstances, this would be no problem, as I could just grab the 'Location' value from the headers, then follow it. However, consider this scenario: the website/page I'm getting data from sends me a Location header, and the resulting redirect has yet another 'Location' header. This is random: some redirects will give me the expected file after 2 redirects, others after 4, or so. I absolutely need to record the most recent (last) 'Location' value that was sent to me. How do I do that? I currently have PHP Curl's Follow Location set to 'on', so it automatically follows any/all locations without me intervening. Since it does this, how do I grab the value of the last location it followed? Example: I post data to Website A using PHP Curl. The reply from Website A contains a 'Location' redirect to Website B. PHP Curl follows it. Website B has a 'Location' redirect, yet again, to Website C. PHP Curl automatically follows it. Website C then has a 'Location' redirect to Website D. PHP Curl automatically follows it, as it did before. Website D then sends me the actual file I wanted to download. Since PHP Curl is automatically following the 'Location', how do I grab the last Location value? In the example above, how would I grab the 'Location' value in Website C, which was the last one followed before reaching Website D, the end destination? Apologies if I haven't been clear enough; I've tried my best to explain what I want to do. It's just that the situation is a bit tricky to put into words.
You are making a good start with formulating the question! Are you looking for something like this? $last_url = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL); You can find the documentation on curl_getinfo() here: http://www.php.net/manual/en/function.curl-getinfo.php As I read your question a second time, my answer is probably not the final solution, but you could get the HTTP header information from the last location and grab the HTTP_REFERER. I don't have an example for this, but it could be a way to get the information you need. But if there are multiple redirections, then this would only capture the previous URL, right? So A to B to C to D, this technique would only get C? This question may be the best approach: just turn off follow location and manually curl each new page, recording each URL: http://stackoverflow.com/questions/6129000/curl-follow-location-but-only-get-header-of-the-new-location @SSHThis Your outlined approach is one route to doing it, but in cases where I don't know how many redirects there are, it would be rather inefficient. curl_getinfo is what I'm looking for
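For completeness, the record-every-Location idea can be sketched outside PHP as well. The following Python script is purely illustrative (the toy redirect chain and all names are made up): it starts a tiny local server that redirects /a → /b → /c → /done, follows the redirects automatically, and records each Location header on the way, so both the final URL (what CURLINFO_EFFECTIVE_URL reports in PHP) and the hop before it are available.

```python
import http.server
import threading
import urllib.request

class RedirectingHandler(http.server.BaseHTTPRequestHandler):
    # /a -> /b -> /c -> /done, mimicking the A -> B -> C -> D chain above.
    chain = {"/a": "/b", "/b": "/c", "/c": "/done"}

    def do_GET(self):
        if self.path in self.chain:
            self.send_response(302)
            self.send_header("Location", self.chain[self.path])
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"file contents")

    def log_message(self, *args):  # silence per-request logging
        pass

class RecordingRedirects(urllib.request.HTTPRedirectHandler):
    visited = []  # every absolute Location URL we were redirected to

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        self.visited.append(newurl)
        return super().redirect_request(req, fp, code, msg, headers, newurl)

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

base = f"http://127.0.0.1:{server.server_port}"
opener = urllib.request.build_opener(RecordingRedirects)
body = opener.open(base + "/a").read()
server.shutdown()

final_url = RecordingRedirects.visited[-1]  # D: the effective URL
```

With the whole chain recorded, `visited[-1]` is the end destination (D in the example) and `visited[-2]` is the last Location followed before it (C), which is what the asker wanted.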
STACK_EXCHANGE
Turing machines that read the entire program tape Consider a two tape universal Turing machine with a one-way-infinite, read-only program tape with a head that can only move right, as well as a work tape. The work tape is initialized to all zeros and the program tape is initialized randomly, with each cell being filled from a uniform distribution over the possible symbols. What are the possibilities for the probability that the head on the program tape will move infinitely far to the right in the limit? Obviously, this will depend on the specifics of the Turing machine, but it must always be in the range $[0,1-\Omega)$, where $\Omega$ is Chaitin's constant for the TM. Since this TM is universal, $\Omega$ must be in the range $(0,1)$, so the probability must always be in $[0,1)$. Is this entire range, or at least a set dense in this range and including zero, possible? Related question: http://mathoverflow.net/questions/64773/finding-inputs-that-make-an-algorithm-run-forever As far as I can see, if you consider a single TM, then you get only one specific probability, not a dense set, whereas if you let the TM vary then $\Omega$ will vary also, and the set of probabilities will contain all rational numbers in $[0,1]$ (and some other numbers too). If you fix the number of symbols but let the TM vary, it's not so clear that you'll get all the rationals, but you'll still get a dense set. EDIT to take into account the revision of the question: Given a universal TM, you can make trivial modifications that maintain universality but change the probability $p$ of going infinitely far to the right. 
For example, modify your original machine $M$ to an $M'$ that works like this: If the first symbol $x$ on the program tape is 0, then halt immediately; otherwise, move one step to the right and work like $M$ on the program minus the initial symbol $x$ (and, just to guarantee universality, if the computation halts, go back to $x$, erase it, and move $M$'s answer one step to the left so that it's located where answers should be). That modification decreases the probability $p$. You can increase $p$ by having an initial 0 in the program trigger a race to the right by $M'$ --- it just keeps marching to the right regardless of what symbols it sees. You can achieve some control over the amount by which $p$ increases or decreases by having the modification $M'$ begin by checking more than just one symbol at the beginning of the program. As far as I can tell, such modifications, carried out with enough care (which I don't have time for just now) should give you a dense set of $p$'s. EDIT to add some details: Given a universal TM $M$ with tape alphabet $A$, and given a subinterval of $[0,1]$, choose an integer $n$ so large that your given interval includes one of the form $[k/|A|^n,(k+1)/|A|^n]$. Let $S$ be a set of $k$ words of length $n$ over $A$, and let $w$ be another such word that is not in $S$. Modify $M$ to $M'$ that works as follows. If the first $n$ symbols on the tape are a word from $S$, then march to the right forever, ignoring everything else. If they are the word $w$, then simulate $M$ on the remainder of the tape (the part after $w$), moving any final answer into the right location, as in my previous edit. Finally, if the word consisting of the tape's first $n$ letters is neither $w$ nor in $S$, then halt immediately.
Then the probability that $M'$ moves infinitely to the right will be at least $k/|A|^n$ (the probability that the initial $n$-word on the tape is in $S$) and at most $(k+1)/|A|^n$ (the probability that this $n$-word is either $w$ or in $S$) and therefore within the originally given interval. My question wasn't very well stated; I will revise it. Andreas considered the interpretation of your question where we fix the program and then vary the input. Let me now consider the dual version of the question, where we fix the infinite random input and vary the program. Surprisingly, there is something interesting to say. The concept of asymptotic density provides a natural way to measure the size or density of a collection of Turing machine programs. Given a set $P$ of Turing machine programs, one considers the proportion of all $n$-state programs that are in $P$, as $n$ goes to infinity. This limit, when it exists, is called the asymptotic density or probability of the set $P$, and a set with asymptotic density $1$ will contain more than 99% of all $n$-state programs when $n$ is large enough, and indeed as close to 100% as desired. What I claim is that for your computational model, almost every program leads to a finite computation. Theorem. For any fixed infinite input (on the read-only tape), the set of Turing machine programs that complete their computation in finitely many steps has asymptotic density $1$. In other words, for fixed input, almost every program stops in finite time. The proof follows from the main result of my article: J. D. Hamkins and A. Miasnikov, The halting problem is decidable on a set of asymptotic probability one, Notre Dame J. Formal Logic 47, 2006. http://arxiv.org/abs/math/0504351. The argument depends on the convention in the one-way infinite tape context that computation stops should the head attempt to move off the end of the tape. The idea has also come up in a few other MO questions: What are the limits of non-halting?
and Solving NP problems in (usually) polynomial time? in which it is explained that the theme of the result is the black-hole phenomenon in undecidability problems, the phenomenon by which the difficulty of an undecidable or infeasible problem is confined to a very small region, outside of which it is easy. The main result of our paper is to show that the classical halting problem admits a black hole. In other words, there is a computable procedure to correctly decide almost every instance of the classical halting problem, with asymptotic probability one. The proof method is to observe that on fixed infinite input, a random Turing machine operates something like a random walk, up to the point where it begins to repeat states. And because of Polya's recurrence theorem, it follows that with probability as close as you like to one, the work tape head will return to the starting position and fall off the tape before repeating a state. My point now is that the same observation applies to your problem. For any particular fixed infinite input, the work tape head will fall off for almost all programs. Thus, almost every program sees only finitely much of the input before stopping. Theorem 3 in the linked paper is exactly the claim that for any fixed input the asymptotic probability one behavior of a Turing machine (in this one-way infinite tape model) is that the head falls off the tape.
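As a concrete instance of the interval-fitting step in the construction from the first answer (the specific numbers here are chosen purely for illustration): to place $p$ in a target interval $(0.3,\,0.4)$ with a two-symbol tape alphabet, take $n = 5$ and $k = 10$:

```latex
\left[\frac{k}{|A|^{n}},\ \frac{k+1}{|A|^{n}}\right]
  = \left[\frac{10}{32},\ \frac{11}{32}\right]
  = [0.3125,\ 0.34375] \subset (0.3,\ 0.4).
```

So $S$ may be any $10$ of the $32$ binary words of length $5$, with $w$ an eleventh such word; the modified machine $M'$ then marches right forever with probability between $10/32$ and $11/32$, inside the target interval.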
STACK_EXCHANGE
package im500.main;

import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

public class Scanner {

    // Overshoot factor: turning the radar by twice the offset keeps the
    // target inside the radar sweep even while it moves (a simple radar lock).
    private static final double CORRECTION_FACTOR = 2.0;

    private AdvancedRobot robot;

    public Scanner(AdvancedRobot robot) {
        this.robot = robot;
    }

    public void scan() {
        robot.scan();
        // Spin the radar indefinitely until onScanned() narrows the sweep.
        robot.turnRadarRightRadians(Double.POSITIVE_INFINITY);
        robot.execute();
    }

    public void onScanned(ScannedRobotEvent event) {
        // Absolute bearing of the target (robot heading + relative bearing)
        // minus the radar's current heading gives the offset; multiply by the
        // correction factor so the radar overshoots and sweeps back.
        robot.setTurnRadarRightRadians(CORRECTION_FACTOR
                * Utils.normalRelativeAngle(robot.getHeadingRadians()
                        + event.getBearingRadians()
                        - robot.getRadarHeadingRadians()));
    }
}
STACK_EDU
M: Greenland really is melting - two independent studies confirm - chrisb http://www.greencarcongress.com/2009/11/greenland-20091115.html R: presidentender Maybe now farming and ranching will be viable there again, as it was when the Vikings were living there. R: camccann I know you're probably joking, but for the record: Greenland was marginally more livable during the medieval warm period, but even so farming and ranching were only barely viable and the Norse settlers never ventured very far north or inland. The... very optimistic name "Greenland" (directly translated from Old Norse) was chosen by Erik the Red because "people would be eager to go there if it had a good name", proving that even Vikings cared about marketing and branding. The vast bulk of the ice sheets predates human civilization and weathered the warm period comfortably; the modern melting situation isn't really comparable. R: KevinMS "The vast bulk of the ice sheets predates human civilization and weathered the warm period comfortably; the modern melting situation isn't really comparable." What is your evidence that, this time, the ice won't "weather the warm period comfortably"? It was warmer in Greenland back in the Viking days than it is now, but the ice melt is not really comparable? Was the warmth then not really the same quality warmth we have now?
HACKER_NEWS
1. Error message: "windows cannot find 'hl.exe'" Issue: After launching the mod with the shortcut or !launch.bat, I receive the following message: "windows cannot find 'hl.exe'. make sure you typed the name correctly, and then try again." Solution: The game can't find hl.exe because you installed it into the wrong folder! When you run the installer and change the directory to point to your Half-Life folder, the installer creates another Half-Life folder inside your Half-Life folder. To solve the problem, uninstall PARANOIA, then reinstall it and point the directory only to \Steam\steamapps\. The installer will then say "Do you want to use the existing Half-Life folder?". Say yes and everything will install in the right folder. You can also edit the install path manually instead of using the "Browse" function. 2. Missing graphic effects Issue: The new graphic effects (bump maps, gloss effects, etc.) are missing! What should I do? Solution: Check that the opengl32.dll file is present in your Half-Life folder. The mod needs this file to display the new graphic effects. If the file is missing, copy it from the paranoia\sdk folder to your Half-Life folder. Without opengl32.dll in the Half-Life folder, the new graphic effects will not be available. 3. Error message: "The specified video mode is not supported!" Issue: After launching the mod, I receive the following message: "The specified video mode is not supported!" Solution: D3D and Software video modes are NOT SUPPORTED in PARANOIA. The mod works only in OpenGL mode with 32-bit colors. Start the mod by running the !Launch_game.bat file located in the mod's folder. 4. Error message: "This OpenGL mode is not supported!" Issue: After launching the mod, I receive the following message: "This OpenGL mode is not supported!" Solution: The mod works only in OpenGL mode with 32-bit colors. Start the mod by running the !Launch_game.bat file located in the mod's folder. 5.
Music volume too loud Issue: The volume is too loud, but I can't change it in the Audio tab. Solution: Launch the mod and go to Options > Multiplayer > Advanced > MP3 Volume. Change the value from 1.0 to 0.1 and click OK to apply. 6. Low framerate (FPS) Issue: I have a high-end system, but my framerate is too low (10-14 FPS)! How can I improve my framerate? Solution: Update your video drivers and adjust the graphical settings in your video card control panel to improve the display of applications running in OpenGL video mode. 7. Anti-cheat services prevent me from playing online Issue: After installing PARANOIA, I am not allowed to play online with my multiplayer HL mods. The anti-cheat services call me a cheater! What is going on? Solution: The anti-cheat services react to the presence of the opengl32.dll file in the Half-Life folder. Remove the file to solve the issue, but don't forget to put it back when you want to play PARANOIA. 8. Console commands and parameters Question: I want to tweak the new graphic effects. Which console commands and parameters should I use to control the new graphic effects? - gl_renderer PARANOIA renderer for all the new OpenGL effects - gl_detailtex detailed textures - gl_bump bump mapping - gl_specular gloss mapping - gl_grayscale gray scale - gl_blurtest motion blur (parameters from 1 to 20) - gl_dynlight dynamic lighting - subtitles subtitles 9. Dynamic shadows Question: Where are the dynamic shadows in PARANOIA? Answer: Unfortunately, dynamic shadows are not supported by our mod's renderer. 10. Compatibility with Half-Life: Source Question: Does the mod work with Half-Life: Source? Answer: No, it does not work with HL: Source. PARANOIA works with Half-Life (GoldSrc engine) only. You need Half-Life v.184.108.40.206 (or higher) to play PARANOIA without issues. 11. Error message: "TextMassegeParsemassegeCount..."
Issue: When I click on New Game, it begins to load the game and then writes this: "tmessage:TextMassegeParsemassegeCount>=MAXMASSAGES" Solution: Your version of Half-Life is outdated. You need version 220.127.116.11 or higher to play PARANOIA without issues. Try to update your Half-Life. 12. HUD and crosshairs are gone Issue: When I went to play the game, the HUD and crosshair were gone. How do I get them back? Solution: Try the console command "hud_draw 1" in the console window. If that does not help, you should reinstall PARANOIA. Do not forget to make a backup copy of your game save files! 13. No transparency on the models Issue: There is no transparency on the models in the game. The trees are black and the ballistic face shield is not transparent either. How do I fix it? Solution: Your version of Half-Life is outdated. Try to update your Half-Life. p.s. Any questions? Post them here please! ;)
OPCFW_CODE
CSE 422 Lab 2: Creating a Multi-Threaded Chat In this lab you will be making a server application for a chat room. You will need to use threading to listen for a client's message, as well as wait for any number of clients to connect. We will give an introduction to the Pthread library for this lab. Along with threading, you will also have to use a mutex lock to maintain the stability of the list of clients. We have supplied a code skeleton for the server application, including the locking and unlocking of the mutex. We have also supplied the client application source code for you to use, but you should not alter that code, because you will not be turning it in; your server will need to work with the client as is. The client application takes as arguments the server hostname and a UDP port, as in the client program in Lab 1. The server takes no command-line arguments. You will need to download: lab2_server.cc, lab2_client.cc, lab2_packet.h, and the makefile. The skeleton will compile, but the server will not do anything useful. Overview of the Chat Room Application: The application will allow multiple clients to connect at one time. It will maintain a list of the clients connected at any time. Messages sent by a client will be received by the server and broadcast to all the clients. Clients are allowed to come and go as they wish, and so the server must be able to handle clients leaving properly. Clients are also able to check who is online at any time. If the server is told to shut down, it will inform the clients before closing its connection and terminating the application.
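The overview above — a lock-protected client list with each message broadcast to every connected client — can be sketched as follows. Python is used here purely as an illustration (the lab itself is written in C++ with Pthreads), and all names below are invented:

```python
import threading

class ChatServer:
    def __init__(self):
        self.clients = {}             # name -> mailbox (list of messages)
        self.lock = threading.Lock()  # plays the role of the pthread mutex

    def connect(self, name):
        mailbox = []
        with self.lock:               # guard the shared client list
            self.clients[name] = mailbox
        return mailbox

    def disconnect(self, name):
        with self.lock:               # clients may leave at any time
            self.clients.pop(name, None)

    def broadcast(self, sender, text):
        with self.lock:               # consistent snapshot under the lock
            for mailbox in self.clients.values():
                mailbox.append(f"{sender}: {text}")

server = ChatServer()
alice = server.connect("alice")
bob = server.connect("bob")

# Each send could come from a different per-client thread.
t = threading.Thread(target=server.broadcast, args=("alice", "hi all"))
t.start()
t.join()

server.disconnect("bob")          # bob leaves; later messages skip him
server.broadcast("alice", "still here?")
```

The lock is what keeps the client list stable while one thread broadcasts and another connects or disconnects, which is exactly the role the supplied mutex plays in the C++ skeleton.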
Pthread Library Introduction: Here is a man page for the pthread library: http://opengroup.org/onlinepubs/007908799/xsh/pthread.h.html The four functions that you will have to use are: pthread_create(), pthread_detach(), You will also have to use the pthread_t and pthread_create(pthread_t *, const pthread_attr_t *, void *(*)(void *), void *); The first parameter must be a non-null pointer to a pthread_t object. The second is a set of attributes to be used; for this project you should pass NULL to get the default values. The last two parameters are the function to be called, followed by the argument to be passed to that function. Notice that these are void pointers, so you will need to use type casting to pass values to your threaded function. Also, if you want to pass more than one parameter to the threaded function, you should create a struct
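The last point — bundling several parameters into a struct and passing its address through pthread_create()'s single void * argument — can be illustrated as follows. This is a Python sketch (a dataclass stands in for the C struct, and the names are invented):

```python
import threading
from dataclasses import dataclass

@dataclass
class WorkerArgs:
    # Plays the role of the C struct whose address would be passed as the
    # single void * argument of pthread_create().
    client_id: int
    greeting: str

results = []

def worker(args):
    # In C this would be: void *worker(void *p) { struct args *a = p; ... }
    # i.e. the thread function casts the void * back to the struct type.
    results.append(f"{args.greeting} #{args.client_id}")

t = threading.Thread(target=worker, args=(WorkerArgs(7, "hello"),))
t.start()
t.join()
```

The key idea carries over directly: since only one opaque argument can be handed to the thread function, any number of parameters are packed into one structure and unpacked inside the thread.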
OPCFW_CODE
Testing The Big Merge (on test2.freecen.org.uk) Testing FC2 required as per the magnificent merge that's COMING SOON. What is the Great or Magnificent merge? FreeCEN2 is built upon the same code base as FreeREG2 but with an application-sensitive setting so that it displays pages using the CEN colour scheme, provides unique CEN functions and disables REG-specific functions. It had been expected that FreeREG2 code changes would be incorporated into the FreeCEN2 code on a regular basis. This happened initially but unfortunately stopped 2 years ago. Since FreeREG2 has updated all of the software on which it and CEN are built, it was decided that we should bring the FreeCEN2 code back up to date with the FreeREG2 code. This is the Great Merge. When completed it will provide CEN with all of the communications capabilities built for REG, all of the underlying software component updates, all of the guarding added to REG against bots and incorrect inputs, as well as all of the online member tools and system management. Functionality to test in particular: Click through every screen of the site, looking for (or CTRL-F) "FreeREG" instead of FreeCEN Search on FreeCEN-specific fields like Birth County Verify advertisements and cookie warnings are displayed correctly Test email (especially any contact-us notifications) and "messages" Test the "manage counties" functionality Test transcriber registration Do we have any sort of tag for issues resulting from these tests? I don't think so. What would you like? MergeBug? MergeBug created. On "Test the manage counties functionality" - would this be best done by someone familiar with FreeREG and how the functions should work? Testing: volunteer sign-up seems to work fine - but changing the password generates an error: Have updated the userid_details collection to reflect production. This means all Members of production can log in to the system for testing. Corrected a few merge issues on the Member actions.
Removed Actions unique to FreeREG Volunteering and password reset both work as expected on test2 All aspects have been tested to the best of my ability as someone who does not use FreeCEN on a regular basis. Summary of where we are: Master has been merged into freecen_parsing by @benwbrum, tested and bugs fixed by myself. Brought into step with the current master yet again. It is now running on test2 cen with no known issues. Recommend that the merged branch replace freecen_parsing in production and become the base code for CEN development. Master should be merged into the merged branch every 2 weeks. These 2 steps are essential to avoid future divergence and having to do this all again. Ugh @richpomfret @Vino-S @PatReynolds Just went to merge freecen_parsing into master-into-freecen and guess what: 6 areas of conflict!!! We have to get away from updating parsing or we will spend hours resolving conflicts I too agree with @Captainkirkdawson. We need to stop working on freecen_parsing in order to stop ourselves from creating more conflicts. We can continue once the merge is completed. Can I suggest updating only master every sprint, as most of the time we have common requirements and changes for both REG and CEN? We can filter at code level for fixes of FreeCEN/FreeREG-specific issues. Then we will update the freecen branch with master every sprint, so that we have the same code base consistently for as long as we can. We think that testing is complete and that this branch is ready for deployment. We may need a plan for deployment to production. The master-into-freecen branch has been superseded by the multi-modal master branch, which is the common MyopicVicar code base that supports our applications. We should test this and place it into production rather than master-into-freecen We need to test both FreeREG and FreeCEN on test2 with the multi-modal code. @Vino-S is also testing the VLD/build processing files.
This is the branch that needs to be deployed; there is no reason to deploy the interim master-into-freecen branch instead. Denise has tested the baseline search for FreeCEN with citations etc. We need more comprehensive testing of e.g. CEN-specific searches. Closing - any new bugs/issues that arise on LIVE can be added as specific stories.
GITHUB_ARCHIVE
#ifndef DGB_CPU_H_
#define DGB_CPU_H_

#include <atomic>
#include <chrono>
#include <cstdint>
#include <functional>
#include <memory>
#include <string>
#include <thread>

#include "clock.h"
#include "interrupts.h"
#include "common/logging.h"
#include "memory.h"

namespace dgb {

const uint16_t kInterruptRequestAddress = 0xFF0F;
const uint16_t kInterruptEnableAddress = 0xFFFF;
const uint16_t kInterruptHandlers[5] = { 0x40, 0x48, 0x50, 0x58, 0x60 };

struct DebugOp {
  // Opcode
  uint8_t code;
  // Number of bytes of data to consume
  uint8_t length;
  // Additional data used by the opcode if length > 1
  uint8_t data[2];
  // Debug string explaining the op
  std::string debug_string;
  // Program counter
  uint16_t pc;
};

class CPU {
 public:
  CPU(std::shared_ptr<Clock> clock, std::shared_ptr<Interrupts> interrupts);

  uint8_t Read8(uint16_t address, Memory *memory);
  uint16_t Read16(uint16_t address, Memory *memory);
  void Write8(uint16_t address, uint8_t value, Memory *memory);
  void Write16(uint16_t address, uint16_t value, Memory *memory);

  bool ProcessInterrupts(Memory *memory);
  void PrintRegisters();
  void PrintExecutionFrame(int num_instructions, Memory *memory);
  bool RunOp(Memory *memory, int *cycle_count);
  bool RunPrefix(uint8_t code, Memory *memory);

  void Wait() { thread_.join(); }
  void Kill() { is_running_.store(false); }
  bool IsRunning() { return is_running_.load(); }

  // Initializes the CPU's registers as if they are at the end of the
  // bootloader.
  void InitRegisters();

  // Starts the CPU loop in a separate thread.
  // TODO: make memory an injected instance variable
  void StartLoop(Memory *memory);

  // Runs the CPU loop on the calling thread.
  void Loop(Memory *memory);

  void set_pc(uint16_t value) { pc_ = value; }
  uint16_t pc() { return pc_; }
  void set_breakpoint(uint16_t value) { breakpoint_ = value; }
  void set_breakpoint_opcode(int16_t value) { breakpoint_opcode_ = value; }
  void set_paused(bool value) { paused_.store(value); }
  bool paused() { return paused_.load(); }
  void set_debug(bool value) { debug_.store(value); }
  bool debug() { return debug_.load(); }
  void set_preop_callback(std::function<void(DebugOp)> f) {
    preop_callback_ = f;
  }

  enum FlagsMask {
    ZERO_FLAG = 0x80,
    SUBTRACT_FLAG = 0x40,
    HALF_CARRY_FLAG = 0x20,
    CARRY_FLAG = 0x10
  };

 protected:
  // Op helpers.
  // TODO: make these all self-contained (not dependent on instance variables)
  // so they're easier to test.
  uint8_t LoadData8(uint8_t *dest, Memory *memory);
  uint16_t LoadData16(uint16_t *dest, Memory *memory);
  uint8_t LoadData8ToMem(uint16_t dest_addr, Memory *memory);
  void LoadReg8(uint8_t *dest, uint8_t value);
  void Inc8(uint8_t *value);
  void Inc16(uint16_t *value);
  void Dec8(uint8_t *value);
  void Dec16(uint16_t *value);

  void Add8(uint8_t *dest, uint8_t value);  // ADD
  // Performs an 8-bit ADD, but allows the arguments to be 16-bit numbers to
  // properly account for operands greater than 0xFF, for example to implement
  // ADC.
  uint8_t Add8With16(uint16_t a, uint16_t b);
  void AddCarry8(uint8_t *dest, uint8_t value);  // ADC
  void Add16(uint16_t *dest, uint16_t value);
  void Sub8(uint8_t *dest, uint8_t value);  // SUB
  // Performs an 8-bit SUB, but allows the arguments to be 16-bit numbers to
  // properly account for operands greater than 0xFF, for example to implement
  // SBC.
  uint8_t Sub8With16(uint16_t a, uint16_t b);
  void SubCarry8(uint8_t *dest, uint8_t value);  // SBC
  void DecimalAdjust(uint8_t *dest);  // DAA
  void Cp(uint8_t value);
  void And(uint8_t value);
  void Or(uint8_t value);
  void Xor(uint8_t value);
  void Push(uint16_t value, Memory *memory);
  void Pop(uint16_t *dest, Memory *memory);
  void Jump(bool do_jump, Memory *memory);
  uint8_t JumpRelative(bool do_jump, Memory *memory);
  void CallA16(bool do_call, Memory *memory);
  void Return(Memory *memory);
  void CCF();
  void Halt() { halted_ = true; }

  uint8_t RotateLeft(uint8_t value);  // RLC
  uint8_t RotateLeftThroughCarry(uint8_t value);  // RL
  uint8_t RotateRight(uint8_t value);  // RRC
  uint8_t RotateRightThroughCarry(uint8_t value);  // RR
  uint8_t ShiftLeft(uint8_t value);  // SLA
  uint8_t ShiftRight(uint8_t value);  // SRL
  uint8_t ShiftRightIntoCarry(uint8_t value);  // SRA
  uint8_t Swap(uint8_t value);
  void TestBit(uint8_t value, unsigned int bit_index);
  uint8_t SetBit(uint8_t value, unsigned int bit_index);
  uint8_t ResetBit(uint8_t value, unsigned int bit_index);

  std::thread thread_;
  std::atomic<bool> is_running_{true};
  std::atomic<bool> paused_{false};
  std::atomic<bool> debug_{false};
  bool halted_ = false;

  // Memory address at which to break.
  int64_t breakpoint_ = -1;
  // Another temporary breakpoint, used for running only up to a new point in
  // memory (e.g. for the 'next' debug operation).
  int64_t temp_breakpoint_ = -1;
  // Opcode at which to break (it's actually just a uint8, but expanding to
  // 16-bit signed allows for negative numbers to disable it).
  int16_t breakpoint_opcode_ = -1;
  // Memory write address at which to break (actually uint16).
  int32_t breakpoint_write_min_ = -1;
  int32_t breakpoint_write_max_ = -1;
  // Memory read address at which to break (actually uint16).
  int32_t breakpoint_read_min_ = -1;
  int32_t breakpoint_read_max_ = -1;

  // If true, prints the execution frame on every debug step.
  bool watch_frame_ = false;

  uint16_t previous_pc_ = 0;
  std::string previous_debug_command_;
  std::function<void(DebugOp)> preop_callback_;

  // Program counter.
  uint16_t pc_ = 0x0;
  // Stack pointer.
  uint16_t sp_ = 0x0;

  uint16_t af_ = 0x0;
  uint8_t *a_ = reinterpret_cast<uint8_t*>(&af_) + 1;
  // Flags register.
  // Bit:  7 6 5 4 3 2 1 0
  // Flag: Z N H C 0 0 0 0
  uint8_t *f_ = reinterpret_cast<uint8_t*>(&af_);

  uint16_t bc_ = 0x0;
  uint8_t *b_ = reinterpret_cast<uint8_t*>(&bc_) + 1;
  uint8_t *c_ = reinterpret_cast<uint8_t*>(&bc_);

  uint16_t de_ = 0x0;
  uint8_t *d_ = reinterpret_cast<uint8_t*>(&de_) + 1;
  uint8_t *e_ = reinterpret_cast<uint8_t*>(&de_);

  uint16_t hl_ = 0x0;
  uint8_t *h_ = reinterpret_cast<uint8_t*>(&hl_) + 1;
  uint8_t *l_ = reinterpret_cast<uint8_t*>(&hl_);

  // Interrupt Master Enable flag.
  // TODO: default to false?
  bool ime_ = false;

  // Interrupt registers.
  uint8_t interrupt_request_ = 0;  // 0xFF0F
  uint8_t interrupt_enable_ = 0;  // 0xFFFF

  std::shared_ptr<Clock> clock_;
  std::shared_ptr<Interrupts> interrupts_;
};

class TestCPU : public CPU {
 public:
  using CPU::CPU;

  // Register accessor methods
  uint8_t get_a() { return *a_; }
  uint8_t get_b() { return *b_; }
  void set_b(uint8_t val) { *b_ = val; }
  uint8_t get_f() { return *f_; }
  uint16_t get_hl() { return hl_; }
  void set_hl(uint16_t val) { hl_ = val; }
  uint16_t get_sp() { return sp_; }
  void set_sp(uint16_t val) { sp_ = val; }
};

}  // namespace dgb

#endif  // DGB_CPU_H_
STACK_EDU