https://www.chartattack.com/most-of-your-cloud-service/
code
Performance is key to enjoying the benefits of cloud computing. A slow or unreliable cloud service causes frustration and lost productivity for users, so it is important to do everything you can to keep your cloud service running as smoothly as possible. Here are a few things you can do to improve cloud performance:

Use the right tools to monitor your cloud performance
There are many different tools you can use to monitor your cloud performance, some paid and some free. Decide which ones to use based on your needs and your budget.

Use caching wisely
The main goal of a cache is to improve data retrieval performance by minimizing access to the slower storage layer beneath it. A cache usually stores a subset of data transiently, trading capacity for speed, whereas databases hold data that is comprehensive and long-lived. Because caches increase access speed, consider serving files that are frequently requested, or that do not change often, from a content delivery network. Performance can be further improved by employing several caches, allowing users to retrieve data from the location closest to them. Caching can be a great way to improve performance, but it must be used wisely: caching static content (such as images or CSS files) helps, while caching dynamic content (such as database query results) can actually cause problems, for example serving stale data.

Optimize your code
The way your code is written can have a big impact on performance. Write code that is efficient and easy for the server to process.

Use a content delivery network (CDN)
A CDN is a network of servers that distributes material from an "origin" server to end users across the globe by caching content near each user's point of internet access.
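To make the caching advice concrete, here is a minimal, provider-agnostic sketch of a TTL (time-to-live) cache. `TTLCache` and the `loader` callback are illustrative names invented for this example, not part of any real cloud SDK.

```python
import time

class TTLCache:
    """Minimal in-memory cache: entries expire after ttl seconds."""
    def __init__(self, ttl=60):
        self.ttl = ttl
        self.store = {}   # key -> (value, expiry_timestamp)
        self.misses = 0   # how often we had to hit the slow layer

    def get(self, key, loader):
        entry = self.store.get(key)
        now = time.time()
        if entry is not None and entry[1] > now:
            return entry[0]                  # hit: skip the slow storage layer
        self.misses += 1
        value = loader(key)                  # miss: fetch from the slow layer
        self.store[key] = (value, now + self.ttl)
        return value

cache = TTLCache(ttl=300)
# Good candidate: static content that rarely changes.
css = cache.get("/static/site.css", lambda k: f"contents of {k}")
css_again = cache.get("/static/site.css", lambda k: f"contents of {k}")
```

Static assets suit this pattern well; for dynamic data such as query results you also need invalidation, which is exactly where caching starts to cause problems.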
Use a load balancer
A load balancer is hardware or software that distributes traffic evenly across a group of servers, which improves performance by ensuring no single server is overloaded with requests: if one server starts to get overloaded, the load balancer sends some of those requests to another server. A load balancer can also gather analytics on instance performance, and you can use those analytics to spot and address problems.

Be prepared for spikes in traffic
Spikes in traffic can cause problems for even the best-performing cloud services. If you know a spike is coming (for example, during a product launch), make sure you have enough capacity to handle the increased demand; you might need to add more servers or use a larger instance size. Every task that requires manual intervention increases the chance of mistakes and slows the process down, so automate as much of your maintenance and support activity as you can using automation and orchestration tools. Some cloud services offer autoscaling, which automatically adds or removes capacity based on demand. This is a great way to ensure you can handle spikes in traffic without overspending on capacity you do not need.

Keep an eye on your resources
Your cloud service will only be as good as the resources it has to work with. Make sure you have enough CPU, memory, and storage for your needs, and if you are using a shared hosting environment, make sure you are not sharing those resources with too many other users.

Make sure your applications are up to date
Outdated applications can cause all sorts of problems, including performance issues.
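The even-distribution idea behind a load balancer can be sketched in a few lines. Round-robin is one common strategy among several (least-connections, weighted, etc.); the server names here are made up.

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: hands each request to the next server in turn."""
    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def route(self, request):
        # A real balancer would forward the request; we just pick the target.
        return next(self._cycle)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
targets = [lb.route(f"req-{i}") for i in range(6)]
```

With six requests and three servers, each server handles exactly two, so no single server is overloaded.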
Make sure you are using the latest versions of all the software in your stack, including the operating system, web server, database server, and any other applications.

Use a monitoring tool
Monitoring tools help you keep an eye on the performance of your cloud service. They give you detailed information about what is going on with your servers and applications, which can be invaluable when troubleshooting performance issues.

Keep an eye on the logs
Logs are a great source of information when troubleshooting performance issues. Monitor the logs for your cloud service so you can quickly identify and fix any problems.

Overall, there is a lot you can do to improve the performance of your cloud service. By following the tips in this article, you can ensure your service runs as efficiently as possible.
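As a small illustration of mining logs for performance problems, the sketch below flags slow or failing requests. The log format, field layout, and 1000 ms threshold are invented for the example; real access-log formats vary by server.

```python
import re

LOG_LINES = [
    "GET /api/users 200 84ms",
    "GET /api/report 200 2310ms",
    "POST /api/login 500 45ms",
]

def slow_or_failing(lines, threshold_ms=1000):
    """Return paths of requests that were slow or returned a server error."""
    flagged = []
    for line in lines:
        m = re.match(r"(\w+) (\S+) (\d{3}) (\d+)ms", line)
        if not m:
            continue  # skip lines that don't match the expected format
        method, path, status, ms = m.groups()
        if int(ms) > threshold_ms or status.startswith("5"):
            flagged.append(path)
    return flagged

problems = slow_or_failing(LOG_LINES)
```

Running a scan like this on a schedule, and alerting on its output, is one simple way to turn raw logs into an early-warning signal.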
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510368.33/warc/CC-MAIN-20230928063033-20230928093033-00735.warc.gz
CC-MAIN-2023-40
4,820
28
http://www.javafile.com/tickers/vticker/vticker.php
code
Scrolling Ticker Java Applet... very configurable and easy to use. Some explanations: most parameters are obvious, but MESSAGE needs to be explained. Messages should be numbered from 0 to n, with no missing numbers; if any number is missing, the applet won't search for the next message. A message value consists of several tokens separated by "|": "headerfontsettings|HeaderItself|messagefontsettings|MessageItselfHere|URLtoOpen". Font settings consist of FontName, FontStyle and FontColor (a hexadecimal value without a radix prefix) separated by space characters. You can use one of the following platform-independent fonts: FontStyle is an integer: 0 for PLAIN, 1 for BOLD and 2 for ITALIC (you can use 3 for BOLDITALIC). Author: Maxim V. Kollegov
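The applet itself is Java, but the "|"-separated MESSAGE layout described above is easy to illustrate. This is a hypothetical Python parser written for this note, not code shipped with the applet; the sample font names and URL are also invented.

```python
def parse_message(value):
    """Split a ticker MESSAGE value into its five '|'-separated tokens."""
    header_font, header, msg_font, msg, url = [t.strip() for t in value.split("|")]

    def parse_font(settings):
        # Font settings: FontName FontStyle FontColor (hex, no radix prefix)
        name, style, color = settings.split()
        styles = {0: "PLAIN", 1: "BOLD", 2: "ITALIC", 3: "BOLDITALIC"}
        return {"name": name, "style": styles[int(style)], "color": color}

    return {
        "header_font": parse_font(header_font),
        "header": header,
        "message_font": parse_font(msg_font),
        "message": msg,
        "url": url,
    }

msg = parse_message("Helvetica 1 FF0000|News|Helvetica 0 000000|Hello world|http://example.com")
```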
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204461.23/warc/CC-MAIN-20190325214331-20190326000331-00143.warc.gz
CC-MAIN-2019-13
748
12
http://www.javaprogrammingforums.com/member-introductions/39167-introducing-myself.html
code
Hello. I just signed up and would like to introduce myself to the community. I've been learning/using Java for about five years. It was the first programming language I learned, and for the most part, my favourite. I've learned other languages in the past few years, but I always go back to my roots in Java. I'm actually interviewing for a Java Developer role tomorrow, so perhaps soon I can add professional Java Developer as a title. Anyhoo, was happy to run across this forum and I hope to contribute something positive to it.
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982996875.78/warc/CC-MAIN-20160823200956-00273-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
530
2
https://jira.mongodb.org/browse/WT-8959?attachmentOrder=asc
code
The atomic operations in WiredTiger should ensure that a full memory barrier is provided by them. The lock-free algorithms in use by WiredTiger rely on instructions not getting re-ordered across the atomics. On GCC, WiredTiger uses the __atomic builtins to implement its atomic operations. The __atomic builtins provided by the compiler could potentially not be utilizing a full memory barrier instruction on some platforms with a relaxed memory model, like AArch64 (ARM64). This could leave WiredTiger exposed to a data corruption possibility on such platforms. WiredTiger should make sure the underlying primitives it uses include a full memory sync each time they are used. A thread on GCC discussing why __atomic builtins might not be strong enough for the older __sync builtins. Another thread discussing how AArch64 atomics might allow subtle re-orders.

- Does this affect any team outside of WT?
- How likely is it that this use case or problem will occur? Arguably possible on x86-64. Very rare, but possible on AArch64 (ARM64).
- If the problem does occur, what are the consequences and how severe are they? On platforms like ARM64 that have a relaxed memory model for the SMP architecture, it could result in subtle data corruption bugs.
- Is this issue urgent?

Acceptance Criteria (Definition of Done)
All the atomic operations have been guaranteed to provide a full memory barrier on the various platforms WiredTiger supports. The performance impact has been evaluated and found to be acceptable. WiredTiger stress testing and MongoDB patch testing have been successfully completed. WiredTiger stress tests, MongoDB patch tests, WiredTiger and MongoDB performance tests. Change the documentation on building on POSIX systems to ensure the build provides the full barrier that WiredTiger needs. The scope for this ticket has been limited to an investigation into the atomic operations on the ARM platform. New tickets will be created for any follow-on work.
[Optional] Suggested Solution A possible way is to use __sync builtins instead of __atomic builtins as they are meant to provide a full barrier. The modern __sync builtins are implemented using the __atomic builtins, so we have to be careful about doing so. Newer versions of GCC should handle this correctly since a change went into GCC 5.0 to provide __atomic builtins with a full barrier to support the __sync builtins. Another possibility could be to directly use the MEMMODEL_SYNC with __atomic builtins that was introduced to fix the __sync builtins. We will have to investigate the facilities provided by the MSVC compiler.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100705.19/warc/CC-MAIN-20231207221604-20231208011604-00131.warc.gz
CC-MAIN-2023-50
2,606
19
http://pbem.online/wiki/pmwiki.php?n=FarCornersOfTheEarth.HomePage
code
Far Corners of the Earth Welcome to the web site for the Far Corners of the Earth play by e-mail game. This site is intended for use by my players to view their characters, game notes, prop images, and so forth. Special thanks to Charles for providing us with this site and our mailing lists! Please direct any questions about the game itself to gmredux at yahoo dot com.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00796.warc.gz
CC-MAIN-2024-10
371
2
https://lists.boost.org/MailArchives/ublas/att-6126/attachment
code
Hey Cem. Thanks for replying. Will do the needful and get back to you. I understand that the proposal could do with a few more code examples. Thanks.

On Wed, Mar 18, 2020 at 8:15 PM Cem Bassoy via ublas <email@example.com> wrote:

Hi Aniket, thanks for considering Boost/uBLAS. Your potential mentor is David. If he does not respond, just follow the instructions on the GSoC page and upload your proposal. Please note that we appreciate concrete (with code samples), realistic (regarding time) and referring (to previous or pull requests) proposals. Concrete examples inside the proposal will help us understand your intention, and will demonstrate your experience and expertise. Additionally, I advise reading previous GSoC discussions on this or on the general Boost mailing list, and former GSoC student projects and proposals, to get a good feeling for the requirements. Best, Cem

On Mon., 16 March 2020 at 23:15, Aniket Chowdhury via ublas <firstname.lastname@example.org> wrote:

Hi, my name is Aniket Chowdhury and I am a second-year undergraduate student. I wish to implement the DataFrame library for Boost. I have made the project proposal and am attaching the same. As this project is an expansion of the previous project, there are a few directions I would like to work in, whichever adds more to the project:
1. We could keep the existing code base as it is and implement new features on top of it.
2. We could try to restructure the code into Modules (a C++20 feature) and then implement the features.
3. Or we could restructure the code without expression templates, using C++20 One Ranges instead. This makes more sense to me as I believe uBLAS is being ported to C++20.

The DataFrame would include the following features:
1. All the features already present (union, combine, join).
2. Read/write from DataFrame using JSON (boost::property_tree to DataFrame).
3. Operator support for addition, subtraction, multiplication, division, modulo, power, etc., as well as support for comparison operators.
4. Functions to perform apply, apply_element_wise, aggregate, transform and expand on a given DataFrame.
5. Data analysis tools for standard deviation, variance, mean, etc.
6. Re-indexing methods like replace, duplicate, filter, etc.
7. Reshaping methods for sorting, append, pivot, etc.

Full details of the same can be found in the project proposal. I have been in contact with David Bellot for the past week regarding GSoC. I have completed the first draft of the competency check and sent the same to him. I am requesting that he be assigned as my mentor for the same. PFA: the project proposal (linked in case the attachment fails).

This is an official application for GSoC '20. I am open to any and all suggestions. Aniket Chowdhury
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00449.warc.gz
CC-MAIN-2022-05
2,937
10
https://code.agilescientific.com/welly/changelog.html
code
0.5.2, 28 February 2022

0.5.1, 18 February 2022
Curve.values now returns a 1D array for a 1D curve. Project.basis_range to provide the min and max of all curves in the project. Fixed bug #202 with curve indexing. Fixed bug #206 that prevented quality tests from running on aliased curves. Fixed bug #207 that was causing the quality table not to render correctly in some situations.

0.5.0, 14 February 2022
Major change: everything in welly is now much closer to … Curve objects are now represented by wrapped pandas.DataFrame objects (note: not Series as you might expect, so they can be two-dimensional). They were previously subclassed NumPy ndarray objects, so while we've tried to preserve as much of the API as possible, expect some changes. Please let us know if there's something you miss; it can probably be implemented. Many thanks to the developers that made this happen, especially Patrick Reinhard and Wenting Xiong in the Netherlands.
Major change: as previously indicated, the default behaviour is now to load the depth curve in its original units. That is, welly no longer converts everything to metres. Use from_las() to get the old behaviour.
Major change: the well.header is currently a large pandas.DataFrame containing everything from the LAS file's header. In the next minor release, we will restore something more like the original header object. We welcome opinions on how this should work.
Curve objects should be instantiated with …
You can now create a project with welly.read_las('path/to/*.las'). Note: this always gives you a project, even if it only contains a single well. You can get the single well from a path like 'data/myfile.las' with a singleton assignment like well, = welly.read_las('data/myfile.las').
As previously indicated, dogleg severity is now given in units of degrees per course length.
kwargs are passed to Project.from_las(), so you can add things like ignore_header_errors=True. See the lasio documentation for more on these options.
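The singleton assignment mentioned for welly.read_las() is plain Python sequence unpacking, so it can be sketched without welly installed; here a list stands in for the Project object.

```python
# Stand-in for welly.read_las('data/myfile.las'): read_las always returns
# a Project (a sequence of wells), even when the path matched one file.
project = ["well_A"]

# The trailing comma unpacks the single element, and raises ValueError
# if the project holds zero or several wells - a useful early check.
well, = project
```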
A new argument on well.to_las() allows you to control the case of the mnemonics in the output LAS file. The behaviour has always been to preserve the case in the data dictionary; choose 'upper', 'title' or 'lower' to change it.
New docs! They are live at code.agilescientific.com/welly. Feedback welcome!

0.4.10, 22 June 2021
No longer supporting versions of Python before 3.6.
Curve.top_and_tail() has been implemented. It removes NaNs from the start and end, but not the middle, of a log.
You can now optionally pass any of … Well.unify_basis(). These settings will override the basis you provide, or the basis that Well.survey_basis() … I added an example of using this to the … Relatedly, if you pass any of … Curve.to_basis(), it will override the basis you give it, if you give it one.
Welly now uses wellpathpy to convert deviation data into a position log. The API has not changed, but position logs can now be calculated with the high and low tangential methods as well. Dogleg severity is still given in radians, but can be normalized per 'course length', where course length is a parameter you can pass. Future warning: from v0.5.0, dogleg severity will be passed in degrees and course length will be 30 by default.

0.4.9, 29 January 2021
Fixed a bug that was preventing alias names from appearing in the DataFrame view, well.df. Updated the Project tutorial to reflect this.
Fixed a bug that was preventing aliases from applying properly to well plots.
Improved the error you get from w.plot(tracks=[...]) if there are no curves to plot (e.g. if none of the names exist).

0.4.8, 11 December 2020
… tutorials a bit and made sure they all run as-is.
The Location.from_petrel() function accepts a Petrel .dev deviation file. It will extract the x and y location, and the KB, as well as the position log and/or deviation survey.
Curve.plot_2d() now handles NaNs in the curve.
The test functions now accept a keys argument to limit the number of items the tests will be applied to, or to order the appearance of curves in qc_table_html. For example, if you pass keys=['GR'] then tests will only be run on w.data['GR'], regardless of what's in the tests dictionary. This was issue #104.
You can now pass a … from_las. Thank you to Kent Inverarity for implementing this feature.
… YCOORD as standard fields; these are read in as …
Project.plot_map() to make a quick (ugly) scatter plot from x and y location (whatever two fields you provide from the Project), and deprecated find_wells_without_curve(). You can make complex selections with this function, such as "give me all the wells that have at least two of RHOB, DTC or DTS".
Added the recently added index argument (to preserve depth units) to …
The LAS header items EKB and EGL are now captured as … w.location object. KB and GL are captured as …
Thank you Miguel de la Varga for an update that allows a trajectory to have fewer than 3 points.
Thank you DC Slagel for an update that ensures all well header fields are populated with valid types.

0.4.7, 6 June 2020
Load your well in feet! The number one most hated 'feature' has been 'fixed': you can now pass … to Well.from_lasio() to control how the index is interpreted. Use 'original' to keep whatever is specified in the LAS file (probably what you want). To convert to metres, use 'm'; to convert to feet, use … In the next point release, v0.5, we will change the default behaviour to 'original', so if you want to keep forcing to metres, you'll have to change your code to Well.from_las(fname, index='m'). There is a … The Curve object now has a basis_units attribute carrying this information. Either … Thank you to Kent Inverarity for implementing this long-hoped-for feature.
0.4.6, 7 May 2020
Big fix in …

0.4.5, 14 November 2019
Allowed adding the NULL value when writing an LAS file with …

0.4.4, 22 October 2019
Dropped support for Python 2.7 and Python 3.4, and added support for Python 3.7 and 3.8.
… location, whose changes were inadvertently rolled back.

0.4.3, October 2019
You can now pass an … Well.df(), along with the list of keys. You can pass …
A new function, location.trajectory(), generates an evenly sampled trajectory, given a sample spacing in metres.
location.plot_3d() for plotting well trajectories.
Added a new tutorial notebook, tutorials/Location.ipynb, to demonstrate the well path capabilities of Well.location(). The notebook does not cover geographic CRSs. There's still a short example in …
Fixed some buggy behaviour when creating 'empty' wells, and added an example to the top of …
You can now pass a URL directly to Well.from_las() and it will try to read it.

0.4.2, April 2019
Implemented basis updating when slicing. In general, you probably want to 'slice' (get a subcurve) using curve.to_basis() because you can use depth to get the section you want. But if you want to use indexing, like curve[100:110], this operation now updates the basis; curve.basis is therefore updated.
utils.top_and_tail now only works on single arrays, and returns a single array.

0.4.1, 24 November 2018
Fixed a bug in project.df() that was building the DataFrame incorrectly.

0.4.0, 20 November 2018
There are breaking changes in this release.
Export the curves in the current well.data to a Pandas DataFrame with well.df(). Previously, this function returned the DataFrame of the associated LAS file, which is still available in …
Export the curves in the current Project as a Pandas DataFrame with a dual index: UWI and depth.
Made the APIs of various functions more consistent, e.g. with keys always being before basis. This regularization will continue.
Made the way to retrieve keys more consistent, using the flattened list of keys, if provided, or getting all those keys corresponding to curves, if not.
Some of the well methods used to break if there were striplogs in well.data, but they should behave a bit better now.
Thanks to Jesper Dramsch, the documentation should now be working again. Thanks Jesper!
Synthetics don't work anyway and are definitely broken right now. The test is withheld for now.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00241.warc.gz
CC-MAIN-2023-14
7,983
127
https://www.techgroups.com/opportunities/opportunity/61104/
code
As a member of our Software Engineering Group we look first and foremost for people who are passionate about solving business problems through innovation and engineering practices. You will be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, and to partner continuously with your many stakeholders on a daily basis to stay focused on common goals. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative, trusting, thought-provoking environment, one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

* Design, implement and test all Java/J2EE requirements
* Ensure compliance with all requirements, maintain accuracy for the same, and design technical system documents and architectural standards
* Maintain a record of maintenance releases for all applications and maintain knowledge of enhancements
* Recommend enhancements to designs and prepare software for reuse
* Resolve emergency production issues and ensure appropriate resolution of issues within the required time frame
* Document designs, perform unit tests and develop applications
* Code for system design and prepare efficient application programming interfaces
* Design and create test conditions, behaviors/scenarios and scripts to address business and technical use cases
* Use existing tools and techniques for managing and delivering large-scale projects
* Lead the development team by example and ensure that we have adequate unit test cases, test stubs and drivers, and other development test objects
* Participate in and resolve L3 issues that occur, ensure we design for resiliency, and performance-tune the application to minimize frequently occurring issues

This role requires a wide variety of strengths and capabilities, including:
* BS/BA degree or equivalent experience
* Advanced knowledge of application, data and infrastructure architecture disciplines
* Understanding of architecture and design across all systems
* Working proficiency in developmental toolsets
* Knowledge of industry-wide technology trends and best practices
* Ability to work in large, collaborative teams to achieve organizational goals, and passion for building an innovative culture
* Proficiency in one or more modern programming languages
* Understanding of software skills such as business analysis, development, maintenance and software improvement
* Experience of working on Agile
* Involvement in planning, refinement and estimation of stories / JIRAs
* Design and development responsibilities for the work / module assigned
* Involvement in design reviews, code reviews, unit test implementation, code fixes, etc.
* Responsibility for end-to-end delivery of the modules / sub-projects adding to overall business requirements
* Responsibility for production support of the applications being worked on by the team
* DBMS concepts / SQL queries
* Messaging framework: Kafka
* Caching frameworks: Redis / GemFire
* Web services, JSON, AJAX

Good-to-have skills:
* Continuous Integration (CI)
* Angular JS, React

JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as any mental health or physical disability needs. Equal Opportunity Employer/Disability/Veterans
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00410.warc.gz
CC-MAIN-2021-25
4,500
38
https://hondaswap.com/threads/take-me-to-the-prom-in-this.3320/
code
Originally posted by xlfusionxl@Dec 23 2002, 01:50 PM
hey brian, i dont know how fun it would be to ride in that thing alone. lol, can you find a date? super stretch excursion i was in last yr at prom

Originally posted by dohcvtec_accord@Dec 23 2002, 02:31 PM
I worked as a lifeguard in college (guarding at a college pool is the best job EVER), and our motto was "We swim like fish and drink like them too."
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00256.warc.gz
CC-MAIN-2022-49
495
6
https://foundations.projectpythia.org/preamble/how-to-cite.html
code
The material in Pythia Foundations is licensed for free and open consumption and reuse. All code is served under Apache 2.0, while all non-code content is licensed under Creative Commons BY 4.0 (CC BY 4.0). Effectively, this means you are free to share and adapt this material so long as you give appropriate credit to the Project Pythia community. If material in Pythia Foundations is useful in published work, you can cite a specific version of the book as: Rose, B. E. J., Kent, J., Tyle, K., Clyne, J., Banihirwe, A., Camron, D., May, R., Grover, M., Ford, R. R., Paul, K., Morley, J., Eroglu, O., Kailyn, L., & Zacharias, A. (2023). Pythia Foundations (Version v2023.05.01) https://doi.org/10.5281/zenodo.7884572
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510994.61/warc/CC-MAIN-20231002100910-20231002130910-00811.warc.gz
CC-MAIN-2023-40
717
3
http://nepacrossroads.com/about37028-210.html
code
coal stoker wrote: ... I also have a problem. I am trying to figure out why my DHW temp is through the roof. It is like the aquastat on the indirect has no effect. Then it dawned on me: I think the circulator I have between the boilers is forcing water past the DHW circulator and pushing the DHW to my high limit setting. I am no expert but this makes sense to me. Just need some help before someone gets hurt. I also lowered the 3-speed to the lowest setting, hoping this will help with forcing past the DHW zone circulator. Thanks in advance.

This happens with my indirect too, unless there is a constant call for heat (like today, in the single digits outside). I replaced the internal flow-check in the Taco circulator, but that's not enough to hold back the stoker's Taco circulator; it just forces the water right past on warmer days, and DHW temp will be whatever boiler temp is. Over 150° sometimes. I've got used to it. The fact that this ghost-flow stops on a high heat demand actually works out better for me. Our house is poorly insulated, so the baseboard heaters need all the help they can get.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123046.75/warc/CC-MAIN-20170423031203-00583-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,113
10
https://confederate150.com/delta-math-hack-github/
code
Delta Math Hack Github. Hack Forums is the ultimate security technology and social media forum. Delta math answers hack reddit. I speak for us all when I say this. During the vernal equinox in March, I performed a less elegant version of the same experiment and found the latitude of Vancouver. Delta math code and HW answer keys. Answer to a division problem: if you are looking for the answer to this question, then scroll down to see the answer. I just released a little Chrome extension, xhub, that lets you use LaTeX math (and more) in GitHub Pages. Two shadows and the distance between them. Eratosthenes measured the size of the Earth using three data points. Nov 04, 2020: avoid Prodigy Math Game hack cheats for your own safety; choose our lvl 999 2019 Zearn math is Prodigy Math Game level 100 hack that Aug 14, 2020 Prodigy Math Game cheat codes and hacks in 2020 [working!] Pastebin.com is the number one paste tool since 2002. Remember to bookmark this page so you can easily return. Where if a hack is detected, it sets your UUID to 255, which stops you from doing damage. If you want a literal asterisk, underscore, or any other character that is used in Markdown, put a backslash before it. Most of my advanced projects are web based; very familiar with JS, Python, some C++, some Java; competed in some national CTF high school competitions and did decently (2nd most points on my team); messed around with some applications on Kali Linux. Practice thousands of math and language arts skills at school, at home, and on the go! Lacking one data point, I invoke an imaginary friend in Whistler to help me. To put it simply, it modifies the game files to make it easier to hack. I don't know how to help you but I hope someone will. The easiest way to solve it is to just "hack" the stem values +1, so the fix boundary toggle does this.
δ_ij = 1 if i = j, 0 otherwise; ∇f(x): gradient of the function f at x; ∇²f(x): Hessian of the function f at x; Aᵀ: transpose of the matrix A; Ω: sample space; P(A): probability of event A. Then click on any of the keypad buttons. To use, pin the Chrome extension and click on it in any DeltaMath problem for the answer. @connorlapping I appreciate your quick response to this hack being patched, but the problems that have arisen with this new update to Membean have not been resolved regarding this amazing extension.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00398.warc.gz
CC-MAIN-2022-49
2,465
12
https://forum.uipath.com/t/connecting-multiple-attended-bots-to-an-citrix-image-on-aws/194987
code
The problem I am trying to solve for a client is that they have a production Citrix image on an AWS server. I have experience connecting traditional bots from a VDI to Orchestrator, and Unattended bots; I have also connected Attended bots for a handful of users at a time. The client has recently upgraded from their mainframe system to AWS, which is a nice upgrade for the client. Traditionally you access the “Machine” tab on Orchestrator and add the machine name, but in this scenario the machine name, e.g. AWSUS123, will always change: AWSUS321, AWSUS456, etc… Can anyone offer a solution to connect hundreds of users to an always-changing machine name? What goes hand in hand with this is that as the machine name changes, so will the machine key. Thank you for any possible solutions.
http://3b6db1a43aab3aa9e04e8a9348510b7c.patsdayfund.com/wisefixer-windows-7/
Wisefixer Windows 7. If you receive this error message, you can try sending a report to the webmaster; restarting the server can also resolve the error. For more information, contact your provider (services like AOL or Agava, Axia NetMedia) about what caused the overloaded server. You will learn how to solve the problem on your own! Seeing an annoying DLL error message when you run a Windows application? Try uninstalling the problem application completely, then downloading and running the original installation file as an administrator of your system. For device errors shown in Device Manager, first try updating the driver, then downgrading or uninstalling it. An error message telling you that an application is unresponsive, or that it has crashed, is often a first indication that something has gone wrong. But it can also be a positive sign, because it means Windows Error Reporting is in action. Windows Error Reporting is a feature that can report problem information to Microsoft. (In fact, the term covers both the problem reports you see in Windows and a service called Windows Error Reporting.) Microsoft provides this information to the developers of the program that caused the error (whether Microsoft or another supplier), so that they can see which errors occur frequently and eventually develop solutions to those problems. Windows Error Reporting has been simplified and improved in the latest versions of Windows. In Windows XP, the system was mainly manual; if an error occurred, you were prompted to send an error report to Microsoft, and finding out whether a solution had been found was a lengthy and frustrating process. In addition to the improvements in Windows Error Reporting, Windows offers application developers a number of integrated restart and recovery features, so applications can respond more gracefully to hangs and crashes.
An application written with this functionality can respond to a crash by restarting and reopening the document you were working on. If you are using Microsoft Office 2007 or later, you may already have seen these restart and recovery features in action. Over time, you can expect to see more applications with these functions. Not all of the problems that can occur are catastrophic events that cause the Windows Error Reporting feature to intervene. For a variety of other types of problems, big or small, Windows 7 contains a library of built-in troubleshooters. You see a list of them by opening the Action Center and clicking on Troubleshooting.
http://born-critic.tumblr.com/post/29778212425
Fangirl-isms and smart-ass comments. Yep, that's pretty much what I'm about. I generally follow back as long as the blog following me does not contain a lot of snuff / gory / violent imagery, a lot of sexually explicit material, or many weight-loss related posts. Nothing personal, I just don't want to see that stuff on my Dash every time I scroll through it. See you guys around :). Oh and if you're a fan, visit my Sailor Moon blog as well:
https://frontendmasters.com/courses/angular-2/the-anatomy-of-a-build-system/
This course has been updated! We now recommend you take the Angular 9 Fundamentals course. Transcript from the "The Anatomy of a Build System" Lesson >> Scott Moss: I'm going to show you guys a working example of a minimal Hello, World build system that you can use for Angular 2. And then you're going to build one from scratch, and it's gonna hurt, it's gonna suck, you might walk out and leave, tell Mark you're never coming back, but you're gonna know how to do this, all right. [00:00:25] This is like learning to build an app and stuff, it's really great if you have some code there and you go and do some problems, but I think with a build system, you just build it from scratch. You just have to know how to build it from scratch and you gotta figure this stuff out, run into the resources, which is the best way that I've learned how to manage build systems. >> Speaker 2: So you don't recommend using a starter kit? >> Scott Moss: No, I do recommend using a starter kit after you already know what's going on. We're gonna use the starter kit. This build system that you're gonna build now, we're not gonna use it. We're gonna throw it away. But it's gonna get you familiar with it, so when you use the starter kit, at least you'll know what's going on when an error's thrown. [00:01:00] Like, I don't know what's throwing this error. Is it TypeScript, is it webpack, is it, I don't know who's throwing this error. This is gonna help you get familiar with that. So you can get familiar with your environment. Any questions? Okay, let's hop right into it then. [00:01:20] Cool, so if you want, you can try to copy some of this, but just challenge yourself, don't worry about any of this stuff, you are going to build it from scratch. I'll leave it up as a reference. Probably push it up so you can get it, too. So what we're gonna do is I want everyone, it's finally working. [00:01:38] I was trying to get this linter to work for like forever and it's finally just kicked in.
Sublime is so trivial, I don't know. So what I want everyone to do is just make a new folder, make a new repo, right. This is brand new. This is nothing to do with what you guys are doing with Luke, this is completely separate. [00:01:52] If you feel like you already got modules or you feel like you already got the build system down, you all know what you're doing. I don't know, then I guess you don't need to do it. But I'm guessing that it's pretty tough. It was tough for me. So just make a new folder and then inside that just make a new app folder. [00:02:08] And just put root.ts, and just write some TypeScript stuff in there, something that isn't gonna run in the browser, something that needs to be compiled. In my example, I just made an interface and a class that implements the interface. That's all I do. That's obviously not gonna run in the browser. [00:02:23] It obviously needs to be built. So that's what I put there. You can copy what I have if you want. But at minimum that's what you'll need for this. >> Scott Moss: And then what we're gonna do is we're just going to, I'm just gonna walk you through. Again, this is very minimal. [00:02:43] This is not like a production level build system. It's gonna be way more than this, but this is enough to get started and really all you need to get going. And then you can add, depending on your project and what you're doing or what your team's needs are, you can add and take away things. [00:02:59] But we're just gonna cover the full spectrum of the stuff that you're gonna be using. So first let's talk about probably the most important part, which is the bundler. Again we're using webpack. So the way webpack works is you just create a webpack.config.js on the root of your repository, webpack.config.js. [00:03:21] All right, and really, all that does is just exports a module, an object that has this config.
There's a lot of unimportant stuff in here, but really, the only stuff you really need to know about as far as getting it started, is the entry. Because unlike loading script tags in the browser where there isn't a single point of entry there, you're just loading everything up, in module land there's a single point of entry and everything's a tree. [00:03:46] There's a dependency tree of modules. So you need to say, hey, where's the root? In this case, it's that file you just made, that's the root. >> Scott Moss: So that's important. Also this resolve block here, this is telling webpack that, hey, these are the types of files that you can expect to see. [00:04:03] Just letting you know, by the way, just in case. Also the other important thing here is the output, where to place the bundled files. So the path, which is the build folder, which you do not have to create, webpack will create for you if it's not there already, and then the name of the bundled file. [00:04:21] This bundle.js, which is what most people use in webpack, but it can be anything you want. >> Scott Moss: And then the meat and the potatoes are right here, this module object, which, who here has used Gulp? Okay, so this is pretty much like a plugin place where you would do your Gulp plugins. [00:04:44] So loaders are like plugins that transform your modules. So we have this loader called awesome-typescript-loader whose job is to transform any file that has a ts extension on it, excluding these files. >> Scott Moss: Obviously you don't need to put this here because you don't have this in your repo, but I just wanted to show you an example of what exclude does. [00:05:08] So this is just a regex. You can put a file name, you can put whatever you want here, but a regex will obviously be more greedy and capture the stuff that you really want. And then this loader is an npm module that was loaded. So this isn't just some random thing that I installed.
[00:05:21] You can also just exclude the loader part and it'll still work, you don't have to put -loader. It's kind of like back in the days of Grunt where Grunt didn't use require, instead you just had to give it a name and it figured it out. That's what this thing is doing. [00:05:36] If you go look in my node modules, awesome-typescript-loader's right there. So it's coming from there. So it's not explicit like Gulp where you have to require it. It's being a little implicit here. So yeah, that's saying awesome-typescript-loader. If you look at its documentation, its job is just to compile your TypeScript for you. [00:05:55] That's it. But I only wanna do it on these files. Cool, and then this devServer here, again this stuff is not really important as far as getting started because you can really use whatever you want to serve it. But if you wanna use the webpack devServer, this is just an option that's saying, hey, here's your API fallback. >> Speaker 3: Just a quick question on, any reason for using AwesomeTypeScript versus TS Loader? >> Scott Moss: I was using TS Loader for a while but then AwesomeTypeScript, I heard about it, and it was just awesome. So I switched over to it. I mean, I will just show you why I use it and then whoever asked that question can probably see for themselves. [00:06:32] The first line that it says is the best TypeScript loader for webpack. >> Speaker 4: So you have to use it, it's the best one. >> Scott Moss: That's why I use it, it's the best one. Yeah, that was it. So if somebody comes out with another one and says it's even better than the best, I'll probably switch over to that one. >> Speaker 3: What are they gonna do when they come out with the awesomer TypeScript loader? >> Scott Moss: I don't know. I don't know what's gonna happen. >> Speaker 2: The bestest? [LAUGH] >> Scott Moss: And I've had issues with the TS loader, where I had to do some manual configuration like silencing different errors.
[00:07:07] It was just really fine grain, whereas Awesome TypeScript kind of just knows what I want, and it's, here, man, here, I know what you want, here. Whereas TS Loader is, I don't know. So yeah, Awesome TypeScript, but TS Loader will work, too, I've used it before, it totally works. >> Speaker 3: For this example you're going through right now there's no repo to start from. >> Scott Moss: No, there's no repo to start from. This is from scratch. >> Speaker 6: But it's not cuz you already typed most of this. >> Scott Moss: I did, and I said you shouldn't be copying this. [00:07:35] But if you're not confident about yourself, you can totally copy this but it's still probably not gonna work. Most people are still gonna run into some error, that's the whole point. I want you to run into an error, just be like, what? What's going on? That's the whole point. >> Speaker 7: Why are you using webpack to run the compiler instead of, say, Gulp or npm? >> Scott Moss: Good question, I love Gulp. I use Gulp for everything. I actually use Gulp with webpack. So the reason I'm not doing it is, one, I don't like putting everything in npm scripts. [00:08:06] It's just my opinion, but you totally could. Most of these tools here have command line options that you can just pass to npm scripts and just run it, boom, easily. Gulp, Gulp is awesome, too, obviously. And there's also plugins for that. But what about modules? That's the biggest thing about webpack. [00:08:23] It's going to polyfill CommonJS for you in the browser, whereas Gulp and npm scripts won't by default. You gotta build that stuff yourself. So that's the biggest reason right there, it's just the modules. And because I'm already using webpack for modules, I might as well use it for everything else, too.
[00:08:38] But Gulp is still useful because, yeah, webpack is great for doing everything related to your files and assets, but what about one-off tasks that you need, like a deployment task or a create-documentation task or something like that? I usually use Gulp for that. And so what I do is I have Gulp orchestrate webpack for me. [00:08:58] That way I always interface with Gulp and Gulp handles everything. But there are arguments against that, like, webpack can build docs, too. Yeah, it can. It can pretty much do anything, but, again, it's your choice. I use them both and it works fine. So, this is the bundler. [00:09:14] And then typings, do not copy this, this was generated. Remember, typings.json is just a JSON file that's just going to tell your build, hey, these are the type definition files that I'm aware of and I'm gonna give you really cool tools to use when you use these files, pretty much. [00:09:34] You almost never have to come into this file and type stuff in manually. You use the command line tool typings. Do --help and that's really all you need. It will tell you exactly what to do. Tslint is exactly what it sounds like. It's a linter for TypeScript. I don't know what everybody's using here, but I'm sure most major IDEs and text editors have a linter for TypeScript. [00:09:55] And if you don't, just switch over to one that has one, like Atom or Sublime, WebStorm; I think even Visual Studio Code has one now. Another TS file, that's like three of them so far, it's crazy, all these TypeScript files. [00:10:13] This third one here is how we configure the compiler. So what is actually reading this? Well, if you go to webpack and look at the awesome-typescript-loader, this awesome-typescript-loader is using the TypeScript command line tool, which is using this tsconfig.json file. This is, like, how would you know that? [00:10:31] Yeah, so this is how you configure the compiler.
You're giving it options like, what am I targeting? Obviously, we're targeting ES5 because you want this to run in a browser. We can change this to target ES6, and it will compile down to ES6. But then it wouldn't work in the browser, right. [00:10:47] So we're gonna target ES5. What modules do we wanna use? There's different types of modules out there that I talked about. There's SystemJS, which is what JSPM uses, which is actually the standard, which is the real thing. There's RequireJS. There's so many other modules. And if you're using JSPM you can make your own module system. [00:11:06] So there's literally an infinite amount of different types of modules you can use. It's crazy. But we're using CommonJS. Which is not RequireJS, although CommonJS uses require. So, if you got those two mixed up, they're not the same. Okay, it took me a while to figure that out a long time ago. [00:11:22] And then all this other stuff that's really not that important. This is just configuring the compiler. Again, you don't actually have to touch this either. There is a command line tool for tsc. If you don't have that, you can npm install TypeScript globally. Then you do --help on that and it'll also set this up for you. >> Scott Moss: And that's really about it. So, once you get, again, the only stuff you really need set up is this webpack thing here and then a file to build. All this other stuff, I do recommend getting it set up though. You need to learn how to set it up. [00:11:57] Because the beauty of TypeScript is the tooling. So if all you have is a bundler set up and you are not really getting the autocomplete and the linting and the recommendations and the pathfinding, then what's the point? So I do recommend setting up this other stuff and that's what the point of this exercise is, is getting your environment set up so you feel comfortable with it and figuring out all the tools.
[00:12:16] Like I said, everybody's on a different environment so there's gonna be different things, but most of it usually has these four files: the tsconfig, which configures the compiler; the tslint config, which configures the linter; typings, which configures where our type definition files are located; and our webpack config. And this is what typings looks like. [00:12:39] If you use the typings command line, when you start downloading stuff, it creates this folder and just adds all this stuff in here, all types of crazy stuff. And then it will link them up to your typings.json. I just deleted everything out of mine to confuse you all. [00:12:55] So that's why mine is empty.
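The tsconfig.json discussed above might look like the following minimal sketch. Only target and module are called out in the transcript; the remaining options are typical Angular 2-era additions and are assumptions, not quotes from the course:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true
  },
  "exclude": ["node_modules"]
}
```

The decorator options, while not mentioned in this lesson, are what Angular 2's @Component-style annotations require the compiler to emit.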
https://feedback.photoshop.com/photoshop_family/topics/ui-panels-not-rendering-in-catalina-upgrade
This conversation has been merged. Please reference the main conversation: Photoshop 2020: Switching tabs doesn't show the correct image I have contacted Adobe Customer Support and they have continually told me to delete my preference files (Illustrator) or turn on/off Legacy compositing (Photoshop). Despite doing these, neither has fixed the problem. How can this be solved for good? It's highly disruptive to my workflow.
https://community.adobe.com/t5/premiere-pro/changing-file-names-and-keeping-xml-files/td-p/12076429
How do I change my file names and keep my XML files attached to the clip? Does Premiere even make use of the XML files? Are you exporting an XML to another app, or are you importing an XML into PrPro? Neither. Currently I import a batch of clips directly into PP and I get a message that the XML files could not import, or some such thing. I click okay and move on to edit, no problems. This leads me to believe that PP doesn't need these files, and I wonder if I really need to keep them like many people say I should. If I want to rename my video files before importing into PP, the XML names don't change along with them, and PP won't import them anyway, so what is the point of keeping them? Okay, it's an XML file for a video clip or a still file? What created that XML? Video clip, Sony a7riii. The XML file is created with the clip. This is what the files look like, and when I select all to import, the XML files do not get imported. That's a difficult thing for me to sort, not having any experience with what Sony puts in those XML files. Adobe apps don't always see non-Adobe created XMLs though; that's come up several times. @mattchristensen ... any idea what PrPro can do with Sony camera-created XML files?
https://hitechonlife.wordpress.com/tag/surfacepro/
Switching from a Mac to a PC is an interesting thing to do. When I made the jump, it was to a Surface Pro 4. Listen to my thoughts on making the switch and a comparison to my previous Macs. EDC? Try EDT! Every Day Tech! In episode 10 I go through all the tech I encounter on a daily basis. Part 1 is the known tech. Tomorrow is the unknown!
https://bitmap2lcd.com/blog/glcd-data-array-output-options/
Bitmap2LCD is a tool for programming small graphic LCDs in embedded systems.
Output of the GLCD data array in a binary file: the converter data output feeds a built-in hexadecimal editor and is saved to disk as a binary file (*.hex).
Output of the GLCD data array in a text file for the compiler or assembler, in 8-, 16-, or 32-bit data format: the converter data output feeds a built-in text editor and can be saved to disk as a text file (*.c, *.h, *.asm, *.lib, etc.).
Next to the data file, a rich text file (*.rtf) containing all GLCD conversion parameters is written to disk in the user-defined “Documents folder”.
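For illustration, the text-file output described above is typically a C array along these lines. This is a hypothetical example, not Bitmap2LCD's exact output format; the image data is an arbitrary 16x8 monochrome pattern:

```c
/* Hypothetical GLCD bitmap exported as 8-bit C data
   (illustrative only, not Bitmap2LCD's exact format).
   16x8 pixel monochrome image, one byte per column of
   8 vertical pixels, LSB at the top. */
const unsigned char glcd_bitmap[16] = {
    0x00, 0x7E, 0x42, 0x42, 0x7E, 0x00, 0x18, 0x24,
    0x24, 0x18, 0x00, 0x3C, 0x20, 0x20, 0x00, 0x00
};
```

A 16-bit or 32-bit export would simply pack two or four of these column bytes per array element.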
https://blog.bar-solutions.com/?p=239
I ran into an issue at a customer site where certain triggers were disabled in the database where they should have been enabled. It appeared that an update script, which is run every night, first disables all triggers on a couple of tables. It then does what it needs to do without the overhead of the trigger code and, when it's done, enables all triggers again. If the update code fails for whatever reason, the re-enabling of the triggers is not performed, leaving the triggers in a disabled state on the database, which can cause problems in everyday use. What you would actually want is a situation where you can disable the triggers, but just for the current session. Any other session should have the triggers enabled at all times. Unfortunately Oracle doesn't support this kind of enabling/disabling of triggers. We do however have access to all the possibilities of PL/SQL in triggers, so we can build a solution to this problem ourselves. I have come up with a semaphore package which allows me to set and clear a flag or semaphore which I can then check in the trigger code and then, depending on the value of this semaphore, execute or skip the code in the trigger. I have also used the knowledge I posted here about a boolean that's really an integer.
CREATE OR REPLACE PACKAGE bar_semaphore IS
  -- Author : Patrick Barel
  -- Public function and procedure declarations
  FUNCTION sem_emp RETURN BOOLEAN;
  PROCEDURE set_sem_emp;
  PROCEDURE clr_sem_emp;
END bar_semaphore;
/
CREATE OR REPLACE PACKAGE BODY bar_semaphore IS
  -- Private variable declarations
  g_emp PLS_INTEGER;
  -- Function and procedure implementations
  PROCEDURE MINVALUE( p_variable_in_out IN OUT PLS_INTEGER
                    , p_min_value_in    IN     PLS_INTEGER DEFAULT 0 ) IS
  BEGIN
    IF p_variable_in_out < p_min_value_in THEN
      p_variable_in_out := p_min_value_in;
    END IF;
  END MINVALUE;
  PROCEDURE initialization IS
  BEGIN
    g_emp := 0;
  END initialization;
  FUNCTION sem_emp RETURN BOOLEAN IS
  BEGIN
    RETURN( g_emp > 0 );
  END sem_emp;
  PROCEDURE set_sem_emp IS
  BEGIN
    g_emp := g_emp + 1;
  END set_sem_emp;
  PROCEDURE clr_sem_emp IS
  BEGIN
    g_emp := g_emp - 1;
    MINVALUE( g_emp, 0 );
  END clr_sem_emp;
BEGIN
  initialization;
END bar_semaphore;
/
CREATE OR REPLACE TRIGGER tr_emp_bri
BEFORE INSERT ON emp
FOR EACH ROW
DECLARE
  -- local variables here
BEGIN
  IF NOT( bar_semaphore.sem_emp ) THEN
    IF :NEW.ename IS NULL THEN
      :NEW.ename := '<EMPTY>';
    END IF;
  END IF;
END tr_emp_bri;
/
CREATE OR REPLACE TRIGGER tr_emp_bru
BEFORE UPDATE ON emp
FOR EACH ROW
DECLARE
  -- local variables here
BEGIN
  IF NOT( bar_semaphore.sem_emp ) THEN
    IF :NEW.ename IS NULL THEN
      :NEW.ename := '<EMPTY>';
    END IF;
  END IF;
END tr_emp_bru;
/
clear screen
set serveroutput on
select * from emp;
update emp set ename = null where empno = 7934;
select * from emp;
rollback;
select * from emp;
exec bar_semaphore.set_sem_emp;
update emp set ename = null where empno = 7934;
select * from emp;
rollback;
exec bar_semaphore.clr_sem_emp;
update emp set ename = null where empno = 7934;
select * from emp;
rollback;
select * from emp;
exec bar_semaphore.set_sem_emp;
update emp set ename = null where empno = 7934;
select * from emp;
rollback;
connect scott/tiger
select * from emp;
update emp set ename = null where empno = 7934;
select * from emp;
The output of the script shows
that when a new session is connected and the semaphore is set in the other session, disabling the triggers, this has no influence on the newly connected session. It is as if the semaphore was never set, and the triggers function as normal. This way the triggers can be turned off for certain scripts, or pieces of code, that don't want to be bothered by the code in the trigger, while other sessions still have this code in place and still have it turned on. You can either choose to have a single semaphore in a package, or a semaphore for every table, or even semaphores for different functions in the code. This way execution can be turned off and on exactly the way you want it.
http://www.jensbits.com/2012/03/28/tracking-multiple-domains-individually-and-as-a-group-in-google-analytics/
Website managers who track multiple top-level domains individually but also want to track some or all of them as a group in one profile can do so with the proper tracking code installed. Say you have multiple sites that you manage: site1.com, site2.com, and site3.com. You want each to have its own analytics profile that the site owners can access. You also want to dump all three sites into a single profile as an aggregate. Useful for saying things like, “I manage three sites that get X amount of traffic.” Or for seeing the totals on web stats that all sites impact. For example, sports-car-shirts.com and ferrari-shirts.com: how many Ferrari shirts were sold on both sites? Easy to determine if they dump into the same profile. Create the Accounts Google Analytics (GA) consists of Accounts which contain one or more Profiles. Accounts have an alphanumeric account ID associated with them in the form of UA-1233456-1; this is what is placed in the tracking code. Profiles have a numeric profile ID that is sometimes referred to as the table ID. It can be found by going to Admin (upper left) and clicking on Profile Settings under the profile name. It will be listed as Profile ID. The Profile ID is not needed to set up the tracking. Account ID (tracking code): UA-1233456-1. Profile ID (also known as table ID): 12345678. One Account can have multiple Profiles with distinct Profile IDs. When you create a new GA Account, always, always, always create 2 profiles. That goes for every Account you set up. Create a base or “raw” profile and don't ever touch it again. Look but don't touch. Create a second “default” profile that you apply filters to and perform all other sorts of magic on. If the “default” profile ever gets messed up by a filter or other unintended consequences, you still have all your data in the “raw” profile. Profiles are the “buckets” that data gets dumped into. They can contain all the data or a portion based on filters.
Using the example of three (3) separate sites, we will have four (4) separate accounts, each with two (2) profiles in them (raw and default): one Account for each of the sites and one for the aggregate. Sounds like a lot, but don't worry; this is easy and quick. The site owners will get access to the “default” profile of the account respective to their website. For each of the three sites, add the tracking code with multiple trackers. One will track the site individually, one will dump into the aggregate account. The standard async ga.js snippet with a second, named tracker looks like this (the UA- numbers are placeholders):
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-1']);     // the site's own account
_gaq.push(['_trackPageview']);
_gaq.push(['agg._setAccount', 'UA-YYYYYY-1']); // the aggregate account
_gaq.push(['agg._trackPageview']);
(function() {
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
The multiple-tracker code puts the data in the site's profile first and then puts the data into the aggregate account. Place this code on all three sites, swapping out the site's account number as appropriate. Once it is all set up, put a filter on the “default” profile of the aggregate account to display the domain name in the reports. Leave the “raw” profile of all the accounts alone. This will help you distinguish pages with the same name in the reports. For example, site1.com/contact.html and site2.com/contact.html. See Modify your cross-domain profile with a filter to show the full domain in your content reports. Check that data is being tracked by checking the “Tracking Status” in the Account Admin (Admin -> Tracking Code). Check the data received in each account and profile the next day to make sure the appropriate data is being dumped where it should be and that your hostname filter in the aggregate profile is working properly. If this post helped you out, please consider donating to help pay the hosting fees. 100% of the donations go to the web host.
http://dotnet.sys-con.com/node/2274497
By Yung Chou | May 31, 2012 11:00 AM EDT
In Windows Server 2008 R2 (WS2008R2), Terminal Services (TS) was expanded and renamed to Remote Desktop Services (RDS). RDS is the backbone of Microsoft's VDI solutions. In Windows Server 2012, RDS is further enhanced, with a scenario-based configuration wizard, yet the concept and architecture remain very much the same as in WS2008R2. The new and enhanced architecture takes advantage of virtualization and makes remote access a much more flexible solution with new deployment scenarios. To realize the capabilities of RDS, it is essential to understand the functions of the key architectural components and how they complement one another to process an RDS request. There are many new terms and acronyms to become familiar with in the context of RDS. For the remainder of this post, note that RDS implies the server platform of WS2008R2 and later, while TS implies WS2008. There are five main architectural components in RDS, as shown, and all require an RDS licensing server. Each component includes a set of features designed to achieve particular functions. Together, the five form a framework for accessing Terminal Services applications, remote desktops, and virtual desktops. Essentially, WS2008R2 offers a set of building blocks with the essential functions for constructing an enterprise remote access infrastructure. To start, a user will access an RDS webpage by specifying a URL where RDS resources are published. This interface, provided by Remote Desktop Web Access (RDWA) and configured with a local IIS with SSL, is the web access point to RemoteApp and VDI. The URL is consistent regardless of how resources are organized, composed, and published from multiple RDS session hosts behind the scenes. By default, RDS publishes resources at https://the-FQDN-of-a-RDWA-server/rdweb, and this URL is the only information a system administrator needs to provide to a user for accessing authorized resources via RDS.
A user needs to be authenticated with AD credentials when accessing the URL, and the list of RemoteApp programs presented there is trimmed with an access control list. Namely, an authenticated user will see, and be able to access, only authorized RemoteApp programs.

Remote Desktop Gateway (RDG) is optional and functions much the same as its counterpart in TS. An RDG is placed at the edge of a corporate network to filter incoming RDS requests by referencing criteria defined in a designated Network Policy Server (NPS). With a server certificate, RDG offers secure remote access to the RDS infrastructure. As far as a system administrator is concerned, RDG is the boundary of an RDS network. There are two policies in NPS relevant to an associated RDG:

- One is the Connection Authorization Policy, or CAP. I call it a user authorization list: it shows who can access an associated RDG.
- The other is the Resource Authorization Policy, or RAP. In essence, this is a resource list specifying which devices a CAP user can connect to via an associated RDG.

In RDS, applications are installed and published on a Remote Desktop Session Host (RDSH), similar to a TS Session Host (or simply a Terminal Server) in a TS solution. An RDSH loads applications, crunches numbers, and produces results. It is our trusted and beloved workhorse in an RDS solution. Digital signing can easily be enabled on an RDSH with a certificate. Multiple RDSHs can be deployed along with a load-balancing technology, which requires every RDSH in a load-balancing group to be configured identically with the same applications. A noticeable enhancement in RDSH compared with the TS Session Host is the ability to trim the presented list of published applications based on each application's access control list (ACL). An authorized user will see, and hence have access to, only the published applications for which the user is authorized in the ACL.
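The ACL-based trimming can be mimicked in a few lines of plain Python. The application names and groups below are invented for illustration, and a real RDSH evaluates Windows ACLs, not dictionaries:

```python
# Each published app maps to the set of groups allowed to see it.
# By default the Everyone group is in a published app's ACL.
published_apps = {
    "Word":     {"Everyone"},
    "Payroll":  {"HR", "Finance"},
    "DevTools": {"Engineering"},
}

def visible_apps(user_groups):
    """Return only the RemoteApp programs the user is authorized to see."""
    groups = set(user_groups) | {"Everyone"}  # every user is in Everyone
    return sorted(app for app, acl in published_apps.items()
                  if acl & groups)

print(visible_apps({"HR"}))           # ['Payroll', 'Word']
print(visible_apps({"Engineering"}))  # ['DevTools', 'Word']
```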
By default, the Everyone group is included in a published application's ACL, so all connected users have access to it.

Remote Desktop Virtualization Host (RDVH) is a new feature that serves requests for virtual desktops running in virtual machines, or VMs. An RDVH server is a Hyper-V based host, for instance a Windows Server with the Hyper-V server role enabled. When serving a VM-based request, the associated RDVH will automatically start the intended VM if it is not already running, and a user is always prompted for credentials when accessing a virtual desktop. However, an RDVH does not directly accept connection requests; it uses a designated RDSH as a "redirector" for serving VM-based requests. The pairing of an RDVH and its redirector is defined in Remote Desktop Connection Broker (RDCB) when adding the RDVH as a resource.

Remote Desktop Connection Broker (RDCB), an expansion of the Terminal Services Session Broker in TS, provides a unified experience for setting up user access to traditional TS applications and virtual machine (VM)-based virtual desktops. Here, a virtual desktop can be running either in a designated VM or in a VM dynamically picked, based on load balancing, from a defined VM pool. A system administrator uses the RDCB console, called Remote Desktop Connection Manager, to include RDSHs, TS Servers, and RDVHs, so that the applications published by the RDSHs and TS Servers, and the VMs running on RDVHs, can later be composed and presented to users at a consistent URL by RDWA. With this consistent URL, authenticated users can access authorized RemoteApp programs and virtual desktops.

A Remote Desktop (RD) client gets connection information from the RDWA server in an RDS solution. If an RD client is outside the corporate network, it connects through an RDG. If it is internal, it can connect directly to the intended RDSH or RDVH once RDCB provides the connection information.
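RDCB's go-between role can be sketched as a routing decision: session requests go to a session host, while VM-based requests go through the redirector RDSH paired with an RDVH. Everything below (host names, the request shape, the load-balancing rule) is hypothetical, intended only to illustrate the brokering idea:

```python
# Hypothetical inventory registered via the Remote Desktop Connection
# Manager console.
session_hosts = ["rdsh1", "rdsh2"]
# Each RDVH is paired with a designated RDSH acting as its redirector.
rdvh_redirectors = {"rdvh1": "rdsh2"}

def broker(request):
    """Return the host a client should be connected to for this request.

    A minimal stand-in for RDCB: pick a session host for a RemoteApp
    session, or the paired redirector for a VM-based virtual desktop.
    """
    if request["kind"] == "session":
        # Naive load balancing: the host with the fewest sessions wins.
        return min(session_hosts, key=lambda h: request["load"].get(h, 0))
    if request["kind"] == "vm":
        return rdvh_redirectors[request["rdvh"]]
    raise ValueError("unknown request kind")

print(broker({"kind": "session", "load": {"rdsh1": 5, "rdsh2": 2}}))  # rdsh2
print(broker({"kind": "vm", "rdvh": "rdvh1"}))                        # rdsh2
```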
In both cases, RDCB plays the central role of making sure a client gets connected to the correct resource. With certificates, a system administrator can configure digital signing and single sign-on among RDS components to provide a great user experience with high security. Conceptually, RDCB is the chief intelligence and operations officer of an RDS solution: it knows which resource is where, whom to talk to, and what to do with an RDS request. Before a logical connection can be established between a client and a target RDSH or RDVH, RDCB acts as a go-between, passing and forwarding pertinent information to and from the associated parties when serving an RDS request. From a 50,000-foot view, a remote client uses RDWA/RDG to obtain access to a target RDSH or RDVH, while RDCB connects the client to a session on the target RDSH, or to an intended VM configured on a target RDVH.

Above is an RDS architecture poster that visually presents how it all flows together. http://aka.ms/free has a number of free e-books, as well as this poster, with additional information on WS2008R2 Active Directory, RDS, and other components. The configuration in WS2008 is a bit challenging, with many details easily overlooked. Windows Server 2012 greatly improves the user experience by facilitating the configuration process with a scenario-based wizard. Stay tuned; I will discuss this further in an upcoming blog post series.

[This is a cross-posting from http://blogs.technet.com/yungchou.]
http://java.sys-con.com/node/2458045
By Sunil Pathak | November 30, 2012 08:45 AM EST

As per Wikipedia, a typical supply chain is a system of organizations, people, technology, activities, information and resources involved in moving a product or service from supplier to customer. Supply chain activities transform natural resources, raw materials and components into a finished product or service that is delivered to the end customer. In sophisticated market systems, used products may re-enter the supply chain at any point where residual value is recyclable. Supply chains link value chains.

Value chains outline the activities involved in creating value, from the supply side of economics, where raw materials are used to manufacture a product or service, to the demand side, where finished products or components are marketed and shipped to resellers or end users. The value chain proposition in cloud computing is as simple as depicted in the figure below:

Currently, in the cloud market, value creation is limited to specific segments of the industry value chain (though this is changing very fast). There are huge opportunities when we talk about the value system as a whole, because the linkages are not just a compilation of independent activities but an intricate system of highly interdependent activities related by their (often multi-dimensional) linkages.
Through these linkages, the performance of one activity affects the cost, performance, and value of another. The cloud broker business is all about exploiting these linkages and adding value to them, and the cloud market provides a unique opportunity here. The "cloud broker" model, as discussed above, can be applied at various stages of the cloud value chain. Cloud brokerages bring together buyers and sellers of cloud services, and the cloud broker market is being accelerated by the emergence of solutions that make it quick and easy to implement. Some of the activities that brokers in a cloud ecosystem are actively engaging in are:

- Facilitate and operate business-to-business (B2B) transactions
- Facilitate and operate business-to-consumer (B2C) transactions
- Facilitate and operate consumer-to-consumer (C2C) markets across the complete value chain
- Act as service aggregators, bringing business owners and consumers together to get better cost and service
- Act as metamediaries, not only bringing interested parties together but also providing services related to the actual transaction, such as billing, order tracking, and support and maintenance
- Provide technology and process integration across various cloud and business services
- Provide cloud service intermediation, layering value-adds such as identity management or access management on top of multiple services
- Provide management and operational services

Cloud brokers make a very strong case for faster cloud adoption and for unifying the market. Other benefits, such as cost reduction, better and faster discovery, lower transaction costs, finding new business, supply chain efficiency, and monitoring demand and market trends, are common.
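The service-aggregator role can be illustrated with a toy price-comparison broker that sits between consumers and multiple providers. The provider names and prices below are fabricated purely for illustration:

```python
# Fabricated per-hour prices from three hypothetical cloud providers.
offers = {
    "provider-a": {"small-vm": 0.10, "storage-gb": 0.05},
    "provider-b": {"small-vm": 0.08, "storage-gb": 0.07},
    "provider-c": {"small-vm": 0.12},
}

def best_offer(service):
    """Return (provider, price) for the cheapest provider of a service."""
    candidates = [(prices[service], name)
                  for name, prices in offers.items()
                  if service in prices]
    if not candidates:
        raise LookupError("no provider offers %s" % service)
    price, name = min(candidates)
    return name, price

print(best_offer("small-vm"))    # ('provider-b', 0.08)
print(best_offer("storage-gb"))  # ('provider-a', 0.05)
```

A real broker would add the metamediary services listed above (billing, order tracking, support) on top of this basic matchmaking.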
The big guns of the industry are already aware of this and are now trying to integrate and expand in all directions (vertically, horizontally, and via alliances), which is in principle similar to the broker model. Whether they will be able to replace the brokers, or will lose their core competence, remains to be seen (I will discuss this in a future blog...). The bottom line is that brokers make markets in any industry, and cloud computing is no exception. Period.
https://mail.python.org/pipermail/python-list/2007-July/464948.html
code
[2.5] Regex doesn't support MULTILINE? nospam at nospam.com Sun Jul 22 06:56:32 CEST 2007 On Sat, 21 Jul 2007 22:18:56 -0400, Carsten Haese <carsten at uniqsys.com> wrote: >That's your problem right there. RE is not the right tool for that job. >Use an actual HTML parser such as BeautifulSoup Thanks a lot for the tip. I tried it, and it does look interesting, although I've been unsuccessful using a regex with BS to find all occurrences of the pattern. Incidentally, as far as using re alone is concerned, it appears that re.MULTILINE isn't enough to get re to match across newlines: re.DOTALL must be added. Problem is, when I add re.DOTALL, the search takes less than a second for a 500KB file... and about 1 min 30 s for a file that's 1MB, with both files holding similar contents. Why such a huge difference in performance?
========= Using Re =============
pattern = "<span class=.?defaut.?>(\d+:\d+).*?</span>"
pages = ["500KB.html","1MB.html"]
# Veeeeeeeeeeery slow when parsing 1MB file!
p = re.compile(pattern, re.IGNORECASE|re.MULTILINE|re.DOTALL)
#p = re.compile(pattern, re.IGNORECASE|re.MULTILINE)
for page in pages:
    f = open(page, "r")
    response = f.read()
    start = time.strftime("%H:%M:%S", time.localtime(time.time()))
    print "before findall @ " + start
    packed = p.findall(response)
    for item in packed:
More information about the Python-list
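A hedged sketch of one plausible culprit: under re.DOTALL, the lazy `.*?` may scan arbitrarily far past the current `<span>` at positions where no closing tag follows, so matching cost can grow much faster than file size. Replacing it with a tempered class such as `[^<]*` bounds each attempt at the next tag. The poster's actual pages aren't available, so a synthetic string stands in here:

```python
import re

# Synthetic stand-in for the poster's pages (the real files are not available).
html = '<p>x</p><span class="defaut">12:34</span><p>junk</p>' * 1000

# Original approach: DOTALL lets ".*?" cross newlines, so a failed match
# attempt can scan far ahead before giving up.
slow = re.compile(r'<span class=.?defaut.?>(\d+:\d+).*?</span>',
                  re.IGNORECASE | re.DOTALL)

# Tempered version: "[^<]*" cannot run past the next tag, so each attempt
# succeeds or fails after a bounded scan, and DOTALL is no longer needed.
fast = re.compile(r'<span class=.?defaut.?>(\d+:\d+)[^<]*</span>',
                  re.IGNORECASE)

# Both patterns extract the same timestamps on well-formed input.
assert slow.findall(html) == fast.findall(html)
```
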
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00229.warc.gz
CC-MAIN-2019-35
1,343
31
https://gsoc-mcabber.blogspot.com/2010/08/
code
Hi everyone ! As you all know, GSOC 2010 has ended. I'm proud to announce that file transfer is working. The only finished transport module is In-Band Bytestreams, so speed is not optimal, but it's not a problem for small files. I started the SOCKS5 Bytestreams module, but it's a work in progress and is not usable yet. I wrote a README file that contains all instructions to compile and use the module and started documenting the code, so you can generate the doc using doxygen. You can still access the git repository here or download a tarball of the most recent revision at http://github.com/alkino/mcabber-jingle/tarball/master.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991812.46/warc/CC-MAIN-20210515004936-20210515034936-00474.warc.gz
CC-MAIN-2021-21
634
5
https://slima11.newgrounds.com/
code
Welcome to Slima Quest ! Play as the awesome SLIMA Press A to Select or do Actions ! Press Left and Right to MOVE !! (woh movement!) Press _ to JUMP ! (jumpy) Go through many interesting and unique levels full of charm and personality !! This game SCREAMS originality !! (trust me) Play now or face the consequences of your actions. Get it on GameJolt Oh yeah uh, the game is in HTML so you need a browser to play it. I tried making it into an EXE file but it didn't go so well (it was really slow). Oh also I can't put the game on Newgrounds because copyright infringement is through the roof with this one.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510259.52/warc/CC-MAIN-20230927035329-20230927065329-00754.warc.gz
CC-MAIN-2023-40
606
12
https://forum.dhtmlx.com/t/dhx-grid-multiline-and-splitat/25625
code
I'm trying to enable both multiline and splitAt in a grid. During the initialisation, everything works perfectly, as I load the content before splitting the rows. The problem occurs when I try to edit a cell. My cells contain some HTML. Sometimes I need to add an element to a cell, therefore increasing the height of the line. This height is increased properly in the part of the grid where the content is added, but not in the other, causing a gap between the two parts. Here are some screens: Any idea what I should do in order to avoid such problems? Unfortunately the issue cannot be reconstructed locally. If the issue still occurs - please open a ticket at support.dhtmlx.com and provide a complete demo where the issue can be reconstructed.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00577.warc.gz
CC-MAIN-2023-06
735
7
https://yeahexp.com/heap-and-stack-view/
code
What does the heap and stack physically look like in RAM?
- Both the stack and the heap physically live in RAM (we do not consider exotic architectures using special processors/computers).
- Their size and location are determined by the OS.
- The heap can become fragmented (sometimes quite strongly). OSes usually have special routines for defragmenting the heap.
- The stack is usually never fragmented (you can probably imagine a fragmented stack implementation, but that's an oxymoron).
- The stack is, in a sense, faster because the only parameter it works with is the stack pointer (usually a register) - therefore, all operations on the stack are many times faster than on the heap. A POP/PUSH read or write on the stack is a single processor instruction.
- The heap is harder precisely because of its fragmentation, and a simple operation of extracting a value from it can cost dozens (if not hundreds) of processor operations.
- The disadvantages of the stack are that it is small (always an order of magnitude smaller than the heap) - and that access to it is only sequential.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00112.warc.gz
CC-MAIN-2022-33
1,145
9
https://socratic.org/questions/how-do-you-use-the-binomial-series-to-expand-2x-y-9
code
How do you use the binomial series to expand #(2x – y)^9#? This is a lot of work! I have demonstrated the method. I will let you complete the process. Using Pascal's triangle: So for reference type The coefficients as shown in Pascal's triangle are derived by the binomial expansion: Thus for example: Using software I get the final solution of:
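The expansion the answer works toward can be sketched from the binomial theorem, with coefficients from row 9 of Pascal's triangle (1, 9, 36, 84, 126, 126, 84, 36, 9, 1):

```latex
(2x - y)^9 = \sum_{k=0}^{9} \binom{9}{k} (2x)^{9-k} (-y)^k
```

Expanding term by term:

```latex
(2x - y)^9 = 512x^9 - 2304x^8y + 4608x^7y^2 - 5376x^6y^3 + 4032x^5y^4
           - 2016x^4y^5 + 672x^3y^6 - 144x^2y^7 + 18xy^8 - y^9
```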
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526931.25/warc/CC-MAIN-20190721061720-20190721083720-00233.warc.gz
CC-MAIN-2019-30
347
7
https://encyclopedia2.thefreedictionary.com/star+density
code
I see no difference between detecting a dark nebula due to a marked absence of faint stars, and detecting a loose open cluster because of a statistically significant increase in star density at the cluster's plotted position. These galaxies are often too faint for images, so scientists use maps of star density Mussel recruitment and sea star density . - We observed that the abundances of juvenile Mytilus spp. The three levels of star density make this a worthy target that I've enjoyed on many nights. Hubble's blurred vision (SN: 7/7/90, p.4) can resolve only the very beginning of a rise in star density ; the density might level off farther into the core. It's a beautiful sight in a 4-inch reflector at 45x, and the star density holds up well enough to make it spectacular even in the largest of amateur instruments despite overflowing the field of view. Indeed, Lauer says, the actual star density may well exceed this estimate, which is based on Hubble's current optical images. The galaxy's characteristic spiral arms are zones of higher star density within the thin disk. But certain other, smaller parts of the sky have an even higher star density It shows a very large halo nearly 20' across with an abrupt increase in star density I have seen the patchiness in other telescopes and have never known whether I was observing nebulosity or variations in star density , or both. If we aim haphazardly at a location more than 20 [degrees] from the galactic equator, where the average naked-eye star density is only [Lambda] = 0.056 star per square degree, the computer program shows that the chance of not finding any naked-eye stars within the ring is 50 percent - just as likely as heads on a coin flip.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00500.warc.gz
CC-MAIN-2021-31
1,715
21
http://forum.xda-developers.com/showthread.php?t=2726607&goto=nextnewest
code
Does anyone know of any good alternatives to these programs. I am about fed up with both of them. For some reason, every so often, they reset in the middle of the current battery charge and they start collecting data from that point in time instead of the way it is set to which is Since Last Full Charge. I am at the point of dumping both of them but I can't seem to find anything else out there. Last edited by steelersmb; 8th May 2014 at 09:37 PM.
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00125-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
450
2
https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vbW92aWUtYnV0dHMv/episode/M2RhMjc1OWEtODkxMi00OTU0LWEyNjQtMWM1ODMxYzcyMDUx
code
This week we watch The Artist and Extremely Loud & Incredibly Close, Both of which were Best Picture nominated films from the 84th Academy Awards..... We try and understand why. Any Questions Email us at firstname.lastname@example.org This is a Murphy House Production: https://www.facebook.com/MURPHYHOUSEPRODUCTIONS/ Follow us on Twitter: https://twitter.com/moviebutts Our Website: https://movie-butts.captivate.fm/ IMDB Links for the films we watched:
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363465.47/warc/CC-MAIN-20211208083545-20211208113545-00096.warc.gz
CC-MAIN-2021-49
455
6
https://www.supremesys.com/jobs/
code
Do you consider yourself one of the best at what you do? Do you enjoy working from your own location? Are you always thinking out of the box? Do you love making things work better? If yes, then we’re looking for you. So, tell us what you are good at and attach your resume with examples of work.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814827.46/warc/CC-MAIN-20180223174348-20180223194348-00056.warc.gz
CC-MAIN-2018-09
297
2
http://linuxtoolkit.blogspot.com/2010/12/getting-yum-to-install-from-selected.html
code
Monday, December 6, 2010
Getting Yum to install from selected repository only
Working from Using Yum to local install without checking Repository, you can use the same principle to install from a selected repository while disabling the rest.
# yum --disablerepo=* --enablerepo=dag install (packages)
where dag and update are repositories. If you are not sure of the repository names to use, go to /etc/yum.repos.d/xxx.repo. The first line [.....] is the repository name.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118740.31/warc/CC-MAIN-20170423031158-00625-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
464
4
https://blog.localstack.cloud/2023-04-24-case-study-knowbe4/
code
“LocalStack has been a game changer in terms of development speed and efficiency for our team migrating our monolithic application to a serverless Node.js stack on AWS.” KCM GRC Platform by KnowBe4 is a governance, risk management, and compliance (GRC) software platform. Designed to help organizations move away from manual processes, KCM GRC efficiently manages risk and compliance by integrating with various third-party vendors. With KCM GRC, you can manage the complex area of compliance and audits, centralize policy distribution and tracking, and simplify the risk management process. KCM GRC was developed earlier as a single-monolith using PHP and was using AWS Fargate Container Service to host their monolith application. KnowBe4 started to re-engineer the application development using NodeJS and PostgreSQL while using various AWS services like Simple Notification Service (SNS), Simple Queue Service (SQS), S3, DynamoDB, API Gateway, and Terraform as the Infrastructure-as-Code framework. We spoke with Kevin Breton, VP of Engineering at KnowBe4, to learn more about how LocalStack has helped their engineers to improve their application migration and cloud adoption with LocalStack while empowering them with blazing-fast development and testing loops. The initial challenge for the engineers at KnowBe4 was understanding the new AWS technologies and how to use them best. Previously, the software they were working on was a PHP-based monolithic application, where developers could not employ a testing & mocking framework judiciously. With the migration to a cloud-native approach, the engineers wanted to use AWS services best. To solve this problem, Serverless Stack (SST), an open-source serverless application platform, was employed to help the engineers with an efficient development and testing loop. But soon, the wider use cases of the KnowBe4 team forced them to look at alternatives, and this is where Kevin got acquainted with LocalStack! 
The KnowBe4 team started using LocalStack Pro, which has more enhanced features and APIs over the free, open-source community-focused solution. After initial usage, the KnowBe4 team started adopting LocalStack as a solution to spin up various AWS services inside a single-running Docker container. Engineers started using LocalStack with Docker Desktop to build containerized applications and microservices. It led to an improvement in the development & testing lifecycles. The KnowBe4 team prefers to use our awslocal command-line interface (CLI) over Terraform. “While in the past we had to maintain our own tooling and local mocks, with LocalStack we can now empower our devs to iterate quickly without having to perform numerous code commits & waiting for AWS pipelines—hence also saving money on infrastructure!” Kevin engineered a serverless template that creates a custom serverless project with LocalStack and Terraform, including unit tests and all available integrations to simplify the project setup and development process. Lerna is used to bootstrap everything together to create multiple Lambda functions at will. It allows the engineering team to develop and test their serverless functions with Lambda locally and push it to GitLab CI, the CI provider used by KnowBe4, where a staging environment is created using real AWS API calls. The simplification in the engineering process has been a benchmark for the KnowBe4 team as they continue relying on LocalStack for their local cloud development needs. Within a few weeks, Kevin and his team members noticed the value LocalStack was bringing to their development and testing processes. LocalStack has simplified the creation and invocation process of AWS Lambda functions for KnowBe4 by nearly 90%. Previously, creating the Lambda functions on GitLab CI took around 7-10 minutes which is now created and tested locally in just a few seconds. 
It increases the reliability and efficiency of LocalStack as a credible local cloud development platform geared towards increasing developer productivity. With LocalStack, the KnowBe4 team capitalized on our Pro support plan, which aims to support our Pro & Enterprise user base. KnowBe4 adopted the support plan to simplify its integration of local development cycles with LocalStack. With LocalStack support handled by our team, Kevin’s team was able to focus more on taking their application to cloud-native rather than taking time to learn LocalStack. It also prompted them to discover flaky tests easier and faster, giving them a clear view of how to debug them and make the best use of LocalStack. Kevin is now introducing LocalStack to a new team which is core to all of KnowBe4’s AWS initiatives. The project would utilize an event-driven architecture currently written in Python, Ruby, Golang, and Rust - and will leverage a diverse set of services, including AppSync, API Gateway, Lambda, SQS, S3, SNS, and EventBridge. With LocalStack, the team can develop & test their cloud infrastructure as the team decouples their existing code logic and adds new services. With this project, the KnowBe4 team could integrate all their products into a single platform.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475311.93/warc/CC-MAIN-20240301125520-20240301155520-00113.warc.gz
CC-MAIN-2024-10
5,154
15
https://seths.blog/2013/06/the-lab-or-the-factory/
code
You work at one, or the other. At the lab, the pressure is to keep searching for a breakthrough, a new way to do things. And it's accepted that the cost of this insight is failure, finding out what doesn't work on your way to figuring out what does. The lab doesn't worry so much about exploiting all the value of what it produces–they're too busy working on the next thing. To work in the lab is to embrace the idea that what you're working on might not work. Not to merely tolerate this feeling, but to seek it out. The factory, on the other hand, prizes reliability and productivity. The factory wants no surprises, it wants what it did yesterday, but faster and cheaper. Some charities are labs, in search of the new thing, while others are factories, grinding out what's needed today. AT&T is a billing factory, in search of lower costs, while Bell Labs was the classic lab, in search of the insight that could change everything. Hard, really hard, to do both simultaneously. Anyone who says failure is not an option has also ruled out innovation.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.2/warc/CC-MAIN-20231205140836-20231205170836-00325.warc.gz
CC-MAIN-2023-50
1,054
6
https://au.mathworks.com/matlabcentral/answers/892087-how-can-i-differentiate-an-abstract-symbolic-function
code
Hi, I would like to differentiate an abstract symbolic function in MATLAB. To make sure we're all on the same page, let's say I have a function of two variables g(x,y) = f(x - 3y), where f is any function of one variable that is differentiable across its domain. Using partial differentiation, we should expect to see g_x = f'(x - 3y) and g_y = -3 f'(x - 3y). I've tried to use syms to observe this behaviour in MATLAB:
syms x y f(k) g(x,y)
g(x,y) = f(x - 3*y);
g_x = diff(g,x);
g_y = diff(g,y);
In effect, we find that g_y comes out in terms of the derivative of f, which translates to the expected -3 f'(x - 3y) after using the latex function. But we also find that g_x = diff(f(x - 3*y), x), which translates to an unevaluated partial derivative rather than f'(x - 3y). I have tried to use the expand and simplify functions on this result, but they don't seem to transform it into the expected form. Is there a way for me to do this? Note that if the coefficient of x was anything other than 1 or 0, then we have
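For reference, the chain-rule result the question expects, writing u = x - 3y:

```latex
g(x,y) = f(x - 3y), \qquad
\frac{\partial g}{\partial x} = f'(x - 3y), \qquad
\frac{\partial g}{\partial y} = -3\,f'(x - 3y)
```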
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00445.warc.gz
CC-MAIN-2021-39
819
13
https://cms.jinya.dev/guide/theme/access-configuration.html
code
Accessing the theme configuration is rather simple. Every view gets passed an instance method to access configuration values. A simple code sample is shown below: <?= $this->config('group_name', 'config_name') ?> This small code snippet will return either a string or a boolean, based on the type you configured in your theme.php. If you access a configuration you have not set in your theme.php, a PHP warning is output.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100527.35/warc/CC-MAIN-20231204083733-20231204113733-00113.warc.gz
CC-MAIN-2023-50
420
3
https://fmttmboro.com/index.php?threads/battle-of-the-goats-on-saturday-night.39882/
code
Strange that, isn't it? Some teams (like England) get a 5-day break to the R16 but others (like Argentina) only get a 3-day break. Fair enough mate. But they're playing a team who also played today, and they get more time off before a potential final. There's no massively fair way of scheduling matches when you're televising them all. We played our first match on the 21st; Brazil played theirs on the 24th. You could argue Brazil had an unfair advantage with more time to prepare for the tournament.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00438.warc.gz
CC-MAIN-2023-06
490
5
https://bornsql.ca/blog/configuration-manager-shortcut-windows-10-server-2016/
code
(Last updated 2020-01-07) This is more for my own reference than anything. On newer versions of Windows desktop and Windows Server, we may find that the shortcut to SQL Server Configuration Manager is missing. According to Microsoft Docs, the reason for this change is that Configuration Manager is a Management Console snap-in: Because SQL Server Configuration Manager is a snap-in for the Microsoft Management Console program and not a stand-alone program, SQL Server Configuration Manager does not appear as an application in newer versions of Windows. I think this is ridiculous because it does not maintain backward compatibility. This is especially frustrating because the same article reminds us that all changes to SQL Server services should be managed through the Configuration Manager. The workaround is to create our own shortcut as follows:
| SQL Server Version | Path for Shortcut |
| SQL Server 2012 | |
| SQL Server 2014 | |
| SQL Server 2016 | |
| SQL Server 2017 | |
| SQL Server 2019 | |
Share your frustrations with the "modern" Windows UI with me in the comments below.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474595.59/warc/CC-MAIN-20240225103506-20240225133506-00303.warc.gz
CC-MAIN-2024-10
1,064
16
https://www.construct.net/forum/game-development/game-development-design-ideas-25/local-storage-dictionary-array-98988
code
I have seen a lot of questions about Local Storage and although I am relatively new to C2, I thought I would offer this capx to demonstrate the basic use of the Local Storage plug-in. It is a simple game where you have 3 students. You push their corresponding button to move them through the school system until they graduate. You can exit at any time, go back in, and see where you left off. A couple notes: - I also used a dictionary to show how the 2 can be used together. (I personally would also use some global variables for clarity, but thought this was leaner and maybe easier to follow. I was not trying to write perfect code.) - I added a "Clear" button so you can start over. - I also used an array to save grade information as I thought this might help some beginners as well. - Lastly, I added a text field to show last button pushed just to mix up the types of data stored in the dictionary and local storage. - The default preview is set to Chrome. Hope this helps some of you .
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864191.74/warc/CC-MAIN-20180621153153-20180621173153-00267.warc.gz
CC-MAIN-2018-26
993
7
https://www.daniweb.com/hardware-and-software/information-security/threads/519358/need-help-iphone-glitch
code
I've encountered an unknown user appearing in my call log, showing a 16-minute airtime video call to this person in Messenger. I'm just surprised it appeared in my call logs even though I didn't do anything. Can anyone help me track this code or identify who the owner of this account on Facebook is? I attached the screenshot for your reference, and here's the code that I saw in the contact information of this person. Btw, I'm using an iPhone 7. Idk if this is only a glitch or not. I hope someone in this group will help me. Your assistance is highly appreciated. Thank you DaniWeb and everyone,
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146127.10/warc/CC-MAIN-20200225172036-20200225202036-00194.warc.gz
CC-MAIN-2020-10
593
6
http://forums.wineloverspage.com/viewtopic.php?p=5815
code
We recognize that the search engine doesn't find the "TN" we have been using in our Tasting Note titles so we are suggesting that you use WTN (for Wine Tasting Note) in the title instead. (example - WTN: 2000 Ch. Whatever) We would like to go back and edit all previous titles so that any TN's are changed to WTN's. Bill B has volunteered to go back through the WLDG archives here and make the changes on the titles of any TN's that we have posted already. I'll be granting him temporary editing authority for this task. In a day or two we will start updating titles. We will NOT touch the content of the note, only the title. You are also welcome to go back and change your own titles to save us some time. If you don't want the title changed in one or any of your posts, we certainly won't change it. All you have to do is post here that you don't want your titles changed. Last edited by Robin Garr on Wed May 03, 2006 4:59 pm, edited 1 time in total.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.60/warc/CC-MAIN-20170423031202-00029-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
954
6
http://www.blackberryforums.com/bes-admin-corner/79791-cant-assign-device-user.html
code
06-05-2007, 08:25 PM Join Date: May 2007 Can't assign device to user I installed BES Express for Exchange 2007. When I assign a device to a user, there is an error message (attachment). It seems I should wait several minutes for a newly added user, but I added the user two days ago. Something must be wrong. What else should I do? Thanks. Last edited by iamsyu : 06-05-2007 at 08:27 PM.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865438.16/warc/CC-MAIN-20180623225824-20180624005824-00352.warc.gz
CC-MAIN-2018-26
449
8
https://community.adobe.com/t5/robohelp-discussions/robohelp-project-saved-to-shared-network-drive/td-p/10617276
code
I was wondering if anyone can provide a list of reasons why a robohelp project should not be worked on while saved to a network drive. I can't seem to find anything in the robohelp documentation that states why this shouldn't be done. Who said it cannot be worked on while saved to a network drive? That was the case many years ago but that changed several versions back. The only limitation that Adobe cautioned about was the speed of the network and that is outside their control. You haven't said what version you are using but unless you are using a very old version, you need to try it. Until you are satisfied your network is OK, then keep regular backups. Hello Peter, Thank you for your quick response. I am using Robohelp 2017. The biggest problem we tend to have is latency (which is really, really slow). I guess some of the posts I have read were probably outdated. One other issue that I have come across which I discussed in this post, is that if I import an image into robohelp, it will place the image into the top-level project folder instead of the folder of the topic I insert the image into. I was wondering if there might be any other odd behavior I should be aware of while working on a project in a shared network drive? Thank you. I appreciate your help. The location of the project typically has nothing whatsoever to do with where images are stored when you add them. They have always been added at the root level of the project. However, I think this may have changed in the all new 2019 release. I believe in that release images are now in a special folder named assets. You'd have to work with your network engineers for any latency issues. It's not something Robohelp can affect, and I don't think any of us are network engineers. 🙂
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00368.warc.gz
CC-MAIN-2023-50
1,765
8
https://woodgears.ca/tech/pi_holder.html
code
Raspberry Pi and camera module holder
I set up several wildlife cameras, and an IP camera connected to the internet, to track the coming and going at my rural property. But the passive infrared detectors on the wildlife cameras don't work very well in the winter cold, and the IP (internet) camera's motion detection isn't very good. A problem with the Raspberry Pi is that it's so much lighter than the cables that connect to it. It's all too easy to pull it off the table by accident. I cobbled together a simple stand to hold the computer, cables and camera. Hacking around with the Raspberry Pi was so much fun that I decided to get a second one. So now I needed another stand. This time, I documented the construction. You can also see a black cable off to the left in this photo. I didn't have a micro USB cable handy, so I soldered a cut-off USB cable to the power pins instead. I made some wooden spacers to go under the board to allow room for the components on the bottom and the cable. I figure that way, if I trip over the cables, the connectors won't get yanked to the side, so it shouldn't damage the connectors. The block next to the HDMI connector protects it against getting yanked off to the side. There are two holes in the block to allow access to the micro-USB connector and 3 mm audio connector. I won't be using the power connector on this one because I already have a power cable soldered to the board.
Raspberry Pi camera module holder (V1 and V2 modules)
This started as a block of wood. I drilled pilot holes for the screws, then cut a notch along the length, and one across it on the bandsaw. The camera module itself is a 5-megapixel cellphone camera module. Very very small, with a very tiny lens. Surprisingly acceptable photo quality, all things considered. Better than the wildlife cameras, and better than webcams. I also made another small bracket to help support the long antenna on the USB wifi adapter.
The adapter's connector is all plastic so it could easily snap off, especially with a big antenna hanging off it. It has to be relatively far back so the lower USB connector can still be used. I also made a top cover for one of my Raspberry Pi holders, with a camera mount at an angle. This is the one I mounted to my big garage shop in the country. I bought both of my Raspberry Pis before the Pi 2 came out. But for what I'm doing the slower Pi 1 (model B+) is fast enough. I can only get about three still frames per second out of the camera module, and analyzing those takes under 20% CPU utilization. My imgcomp program (motion triggered time-lapses) To my Woodworking website.
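The motion-triggered idea described above (compare successive frames, capture when enough pixels change) can be sketched in a few lines. This toy version is not imgcomp itself, and the frames and thresholds are made up for illustration:

```python
# Toy motion detector: compare two grayscale frames (flat lists of 0-255
# brightness values) and trigger when enough pixels changed noticeably.

def frame_diff_fraction(prev, cur, pixel_threshold=25):
    """Fraction of pixels whose brightness changed by more than pixel_threshold."""
    changed = sum(1 for a, b in zip(prev, cur) if abs(a - b) > pixel_threshold)
    return changed / len(cur)

def motion_detected(prev, cur, trigger_fraction=0.01):
    """True when more than trigger_fraction of the pixels changed."""
    return frame_diff_fraction(prev, cur) > trigger_fraction

# Two fake 100-pixel "frames": identical except a 5-pixel region brightens.
frame1 = [10] * 100
frame2 = [10] * 95 + [200] * 5
print(motion_detected(frame1, frame1))  # False: nothing changed
print(motion_detected(frame1, frame2))  # True: 5% of pixels changed
```

At three frames per second, a loop around `motion_detected` would decide, per frame pair, whether to save the still.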
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038078900.34/warc/CC-MAIN-20210414215842-20210415005842-00182.warc.gz
CC-MAIN-2021-17
2,638
16
https://www.gisarea.com/profile/6123-artur_indio/
code
Ow gisadept, thank you for your help. Well, the files that I want are the hydrological ones; these files are still not converted to shape, and the tools that exist on the webpage only convert .XML files into .SHP. This webpage has the original files used by the Japanese government - there are all the files of hydrologic data, but these files are in .TXT. I think I'm not managing to convert the .TXT files in Global Mapper because of the Japanese font. Thanks in adv.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394756.31/warc/CC-MAIN-20200527141855-20200527171855-00007.warc.gz
CC-MAIN-2020-24
456
3
https://community.articulate.com/discussions/review-360/review360-mobile-access-viewing-and-commenting
code
Review360 mobile access - viewing and commenting Mar 18, 2022 Can reviewers add comments to a course in Review360 from a mobile device? Obviously, it's a better experience using a computer but some of my reviewers are on the go quite a bit and would like to add their comments when they are not by their laptop. Additionally, I'd like to know if reviewers can view the course in a mobile device format (portrait and landscape mode) like you can during development. Since our courses are for Flight Attendants, they will primarily be viewed on a mobile device so having the ability to review the course in that format would be extremely helpful. Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645417.33/warc/CC-MAIN-20230530063958-20230530093958-00425.warc.gz
CC-MAIN-2023-23
652
4
https://www.codeisgo.com/post/using-kong-api-gateway-with-event-driven-system-to-modernize-legacy-integrations-2022050501/
code
Let's talk API gateways and event-based integration a bit. Amazon API Gateway has been a pillar of serverless applications on AWS; it allows developers to manage API endpoints backed by Lambda functions or potentially other services. By Sebastien Goasguen. Building REST APIs with serverless functions has truly empowered developers to deliver products faster in the Cloud. For enterprises with significant on-premises systems, there is no AWS API Gateway, but you have the Kong Gateway, which allows you to do similar things. In this post, we are going to go one step beyond and show you how you can use the Kong Gateway to expose a REST API in front of an event-driven integration. In other words, front your asynchronous event flow with an API. The article covers:
- First, a REST Endpoint
- Second, an IBM MQ Connector in Kubernetes
- Finally, Add a Synchronizer and Transformations
Moving to the Cloud does not mean throwing away decades of enterprise efforts, performance optimization, workflows and systems of record. You do not need to lift and shift everything at once. What you can definitely do is modernize your approach to software development and start bringing in new technologies to your entire software and infrastructure stack.
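As a sketch of the gateway side of "fronting an async flow with an API", a minimal Kong declarative config might look like the following. The service name, upstream URL, and route path are hypothetical stand-ins, not the article's actual setup:

```yaml
_format_version: "2.1"

services:
  # Hypothetical bridge service: receives the REST call, publishes an event
  # (e.g. to IBM MQ), and a synchronizer waits for the reply to answer the client.
  - name: orders-sync-bridge
    url: http://sync-bridge.internal:8080
    routes:
      - name: orders-api
        paths:
          - /orders
```

Kong then exposes `/orders` as the synchronous REST front door while the actual work flows through the event system behind the bridge.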
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00631.warc.gz
CC-MAIN-2022-21
1,283
7
http://gmc.yoyogames.com/index.php?showtopic=308201
code
I need help with creating exp, hp, mp and levels, and which GM I need to make it. I also need to know how to make it so that when I level up I get a spell, and be able to add stats like str, int, dex, etc... and an inventory that can fit the picture that I want, and when I hit a button it will appear and disappear. Please help (I'm the biggest noob ever). P.S. Is there a way to do a char select system? Like being able to select a char and play? Ty if someone is willing to help me with my problems.

Create exp, hp, mp and all other stats by setting global variables or the global arguments to the value of the hp, mp, and exp, etc. To make the levels, just make separate rooms. To get a spell or stats when you go to another level, just code the room creation codes for the rooms of the levels. Inventory... I'm not so sure it's easy to put in without bugs. There is a way of making a character select system: create a room and put objects of the different characters. When the player clicks on one, make the player select it. DONE! VOILA! If this is helpful, remember Coollog Inc. and give credit to Coollog Inc. in your game. PLEASE GIVE CREDIT
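The global-variable idea in the reply can be sketched as plain logic. Python is used here purely as illustration (in Game Maker these would be `global.` variables), and the exp curve and per-level stat growth numbers are made up:

```python
def exp_to_next(level):
    # Made-up curve: each level needs 50 more exp than the last.
    return 100 + 50 * (level - 1)

def gain_exp(stats, amount):
    """Add exp; on each level-up, raise the level and grant stat points."""
    stats["exp"] += amount
    while stats["exp"] >= exp_to_next(stats["level"]):
        stats["exp"] -= exp_to_next(stats["level"])
        stats["level"] += 1
        stats["hp"] += 10          # flat growth per level (assumption)
        stats["mp"] += 5
        stats["stat_points"] += 3  # to spend later on str/int/dex
    return stats

hero = {"level": 1, "exp": 0, "hp": 50, "mp": 20, "stat_points": 0}
gain_exp(hero, 250)  # enough for level 2 (needs 100) and level 3 (needs 150)
```

The same loop structure works for granting a spell on level-up: check the new level inside the `while` and append to a spell list.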
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697745221/warc/CC-MAIN-20130516094905-00047-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,129
5
https://forum.manjaro.org/t/testing-update-2023-04-25-kernels-virtualbox-lxqt-1-3-0-systemd-253-3-kde-gear-23-04-firefox-pipewire/139323
code
Another testing branch update with some usual package updates for you.

Get a whopping 82% off + 2 months free on your subscription with Surfshark VPN.

- Manjaro, like many other open-source projects, relies on the generosity of its community through donations and corporate sponsorships to support its growth and development. These donations are essential in covering the various expenses incurred in the operations of the project, such as server costs, software development tools, infrastructure expenses, training, flying people to events or conferences, and the salaries of key developers. With the help of these donations, Manjaro is able to secure the necessary financial stability that allows the project to continuously improve and remain active. If you love Manjaro, consider donating!
- As you might have seen, some of our team were able to attend FOSDEM 2023, and the conference proved to be incredibly productive for us. See our blog post for more.

Finding information easier about Manjaro

Finding information about Manjaro more easily has always been a topic that needed solving. With our new search we have put all Manjaro data accessible in one place, divided by sections, so it is easier to digest: New Manjaro search engine is available | Blog

- Arch Linux and Manjaro on TUXEDO computers - TUXEDO Computers
- Linux, Judo, unicycles and … Baywatch?! How Vivaldi and Manjaro aim above the ordinary. | Vivaldi Browser
- Framework | Spotlight on Manjaro Linux

Notable Package Updates:

- All our kernels got updated - we pushed some of them to all branches
- Real-time kernels 5.15 and 6.0 EOLed.
For now 6.2 series is supported
- Virtualbox is at 7.0.8
- LxQt got updated to 1.3.0
- We adopted Systemd 253 series and updated to 253.3
- Calamares 3.2.62 includes translation updates and all the fixes we did in 3.2.61 rebuilds
- KDE Gear 23.04.0 introduces some Plasma-Mobile apps to the regular application suite
- Firefox is at 112.0.1
- pipewire we updated to 0.3.70 and pushed to all branches
- More updates to XFCE
- Usual KDE-git, Python and Haskell updates
- We continued to do our Spring-Cleaning

Info about AUR packages

AUR (Arch User Repository) packages are neither supported by Arch nor Manjaro. Posts about them in Announcement topics are off-topic and will be flagged, moved or removed without warning.

Info about GNOME 43

GNOME 43 is here! New in Manjaro GNOME: We now have Gradience in the Manjaro community repo (also available as a Flatpak) for your theme customization pleasure. There are community presets (currently unavailable) available or you can create your own.

Our in-house Layouts Switcher application by @Chrysostomus has some new features as well as various improvements and fixes:
- NEW Dynamic Wallpaper button: Create your own dynamic wallpaper
- NEW Appearance button to open Gradience for theme customization
- NEW The Firefox GNOME theme option automagically updates to the latest upstream version when toggling the radio button off and on again.
- FIXED: Layout preview images now match

KNOWN ISSUE: Missing required packages will not be automatically installed via Pamac. Please check optional dependencies of

Scaling settings for Xorg in GNOME Control Center are back! [HowTo] Gnome Xorg Fractional (per monitor) Scaling

Lonely leftover orphan packages that have been removed from the Arch / Manjaro repositories:
- (Not compatible with Nautilus 43)
- (Not compatible with Nautilus 43 & EOL)

Get our latest daily developer images now from Github: Plasma, GNOME, XFCE. You can get the latest stable releases of Manjaro from CDN77.
Our current supported kernels
- linux419 4.19.281
- linux54 5.4.241
- linux510 5.10.178
- linux515 5.15.108
- linux61 6.1.25
- linux62 6.2.12
- linux63 6.3.0
- linux62-rt 6.2.0_rt3

Package Changes (Fri Apr 14 03:49:07 CEST 2023)
- testing community x86_64: 922 new and 939 removed package(s)
- testing core x86_64: 23 new and 22 removed package(s)
- testing extra x86_64: 581 new and 576 removed package(s)
- testing kde-unstable x86_64: 374 new and 374 removed package(s)
- testing multilib x86_64: 24 new and 28 removed package(s)

A detailed list of all package changes can be found here

- No issue, everything went smoothly
- Yes there was an issue. I was able to resolve it myself. (Please post your solution)
- Yes I am currently experiencing an issue due to the update. (Please post about it)

Check if your mirror has already synced:
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648635.78/warc/CC-MAIN-20230602104352-20230602134352-00480.warc.gz
CC-MAIN-2023-23
4,557
61
https://askleo.com/can_i_install_windows_xp_over_my_wireless_connection/
code
notebook connection? Here is my scenario. The CD-ROM on my notebook doesn’t work. I am thinking of sharing the CD-ROM on my desktop, mapping the installation to the CD-ROM on the desktop and then running it from the laptop over a wireless network card (11 mbps) connection. Will this work? Will XP load the wireless driver upon reboot automatically?

It certainly should work, though not exactly as you expect. I do things a slightly different way that happens to solve this same problem.

Like I said, I believe your approach will work. Windows Setup actually copies over all the files it needs before its first reboot. I don’t believe it will ask for any more files off of the CD-ROM thereafter (but I could be wrong). In any case, I don’t believe that the wireless network will work until much later in the setup process.

My approach is a little different, though, and I actually do this for almost every Windows XP install I make, regardless of whether or not the machine has a working CD-ROM drive. Before even running setup, I copy the entire “I386” directory tree from the CD-ROM to a new subdirectory on the hard disk of the machine I’m setting up. I usually use C:\I386. The I386 directory on the distribution CD-ROM contains all the Windows XP setup files. Now, even though that includes lots of files I don’t need (like drivers for hardware I don’t have, for example), the amount of space that takes up is small compared to today’s hard disk capacities.

Then, after the files have been copied to my hard disk, I run setup.exe, or winnt.exe, from my hard disk’s copy of the setup files in C:\I386. All the files needed are there, and setup never needs the CD-ROM again.

That last point is worth repeating: setup never needs the CD-ROM again. Not just for the setup process, but after that too. Some weeks or months later, when you add hardware to your machine, Windows may need files from the Windows Setup CD-ROM.
If you’ve copied them to your hard drive, as I’ve just described, Windows will remember to get them there instead of asking you to insert the CD-ROM. That’s particularly nice for laptops – if you happen to be away from home or the office at the time, you wouldn’t have a Windows CD-ROM to hand.

There is one “catch” (isn’t there always?) – you can’t use this technique if you want to have the setup process format your hard drive. It would format, and erase, all the files you so carefully copied over. But aside from that, it’s a nice way to streamline the setup process.

A final caveat: don’t lose that Windows CD-ROM. Keep it somewhere safe. If your hard disk ever dies, for example, you’ll need it then to reinstall Windows, one way or another, to your repaired or replaced machine.

7 comments on “Can I install Windows XP over my wireless connection?”

What kind of stupidity is this?? I’ve read 3 different articles that I *thought* could solve my problem and you keep going round in circles without ever saying anything worth listening to… I think what that guy is actually saying is he wants to format C drive and he does NOT have a working CD-ROM, therefore cannot boot from a CD! And yes, you kinda covered that part by saying “you cannot do this if you really want to format your HD”, but then again, you never really answered that either! Me, like most people with what little sense left, realize that “installing windows” does NOT mean “adding”, “upgrading”, “installing new hardware” or “repairing”, but rather formatting a HD and INSTALLING WINDOWS. Anyway, to answer the question that you did not bother to: no, you cannot format your HD like that because the network is kept by the OS itself.
You can however create a FAT partition on one of your drives (using Partition Magic, for example), copy the Windows installation files (through the network) to that new partition, create a bootable floppy (Windows 98 is a very good example), boot your computer with the floppy, format C: from DOS, access your partition from DOS as well, and install Windows from your HD to your HD (only a different partition). Just make sure the partition is FAT, otherwise DOS will not recognize it.

Hi, I really like this solution – I’ve lost count of the times I’ve been 500 miles from the CD-ROM when I needed it. However, how much extra space are we talking here? Plus, is there any auto-play stuff to get around?

Don’t know about a wireless connection, but if you have a floppy drive and a wired connection, you should be able to boot and do it that way, but remember you have to have the proper drivers installed for either wired or wireless.

Is there any way to remove the laptop HDD, hook it up to your desktop via an adapter, and then use the existing OS on the desktop to format the laptop HDD and copy the installation files to the laptop hard drive? It would be a MUCH faster transfer rate than 11Mbps, and once they were in there, reinstall the HDD to the laptop and run setup from there. The HDDs on laptops can be removed with just a couple of screws on the bottom in most cases, and the laptop IDE to desktop adapters are cheap on eBay as well as simple to install. In your situation that would be the method I would use. Good luck!

Is it not possible to copy the I386 directory to a 2nd partition on the HDD, run setup within Windows, and then format the main partition of the HDD? Thinking it through in my head, that would work an absolute charm!? Or am I wrong?!
You can try it, it may work, I’m not sure.

You guys are very smart, but you forget a simple thing: if it is a laptop, how the hell do we put in a bootable floppy, since laptops don’t have floppy drives? I have the same problem. I have an Insys laptop from my mom; she did something to the computer and it has lots of software errors, and it burned 1 memory of 2GB and the wireless adaptor, so now I need to install Windows there with no CD. I was already thinking about doing it that way, with a second partition and all the files there, but I can’t find anywhere the autoexec or the DOS files from the boot floppy to put in the second partition to start from there. Can anyone help me and give me a link with the files from the bootable floppy? Thks
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00779.warc.gz
CC-MAIN-2023-14
6,371
54
http://preview.pyvideo.org/north-bay-python-2017/colossal-cave-adventure-in-python.html
code
The oldest known Colossal Cave source code we have is from 1977 and written for the then 11-year-old PDP-10 in a variant of FORTRAN IV that is now completely defunct and was specific to the PDP-10. We'll talk briefly about a history of video games, FORTRAN, and the PDP machines that led to this, and explain how the quirks of this machine were baked into its variant of FORTRAN IV. We'll look at some ancient PDP manuals to discover the secrets you need to know to read this amazing source code. We'll delve into how one goes about writing a simple interpreter in Python to run this FORTRAN IV code as-is: translating strings, implementing its odd arithmetic and conditional statements, reading in data from our "tape" drive, how we get input and send output using our "teletype". If you want a demo of this game playing right now, it is running on Heroku and accessible via SMS. Just text anything to +1 (669) 238-3683 ("669 ADVENT3") to start a game. Send RESET to restart the game. Case doesn't matter when sending commands. It identifies you based on your phone number and will remember your game indefinitely. (We'll also talk about how we achieved this in Heroku.)
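One concrete example of those odd conditional statements is FORTRAN's three-way arithmetic IF, `IF (expr) 10, 20, 30`, which jumps to the first label when the expression is negative, the second when it is zero, and the third when it is positive. An interpreter can dispatch it in a few lines; this sketch is our own illustration, not the talk's actual code:

```python
def arithmetic_if(value, neg_label, zero_label, pos_label):
    """FORTRAN: IF (value) neg_label, zero_label, pos_label.
    Return the statement label the interpreter should jump to."""
    if value < 0:
        return neg_label
    if value == 0:
        return zero_label
    return pos_label

# Interpreting "IF (K - 7) 100, 200, 300": the interpreter evaluates K - 7,
# then sets its program counter to the line holding the returned label.
```

In a full interpreter this return value feeds a label-to-line-number table that moves the program counter, which is how GOTO-style control flow is usually simulated in a host language without GOTO.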
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00562.warc.gz
CC-MAIN-2022-40
1,171
4
http://02.market/how.html
code
How to use the price comparison search. Well, it's very basic, the same as you would use Google, but with one difference: you need to know the item you are looking for to benefit from the platform's power. For example, if I search "iPhone 12", I will get results, but if I search for "iPhone 12 250GB blue", then I will get results comparing prices for that specific item across the stores selling this item's make and model. Take it for a spin. It's FREE, and it's addictive :) Type your product in the search bar and let the magic happen.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100626.1/warc/CC-MAIN-20231206230347-20231207020347-00805.warc.gz
CC-MAIN-2023-50
536
7
https://gamerbee.net/master-embedded-linux-programming-2nd-edition-ebook-for-only-1/
code
Today’s featured offer comes from the online courses section of the Nuveen Deals Store, where for a limited time you can pay what you want for The Complete Linux eBook Bundle (Mini).

Pay What You Want for Unlocked eBooks: Mastering Embedded Linux Programming, Second Edition. Master the techniques needed to build great, efficient embedded devices on Linux. Embedded Linux runs many of the devices we use every day, from smart TVs to Wi-Fi routers, test equipment to industrial controllers – all of which have Linux at their heart. Linux is a core technology in implementing the interconnected world of the Internet of Things. This comprehensive guide shows you the technologies and techniques you need to build Linux into embedded systems. You’ll begin by learning about the fundamental elements that underpin all embedded Linux projects: the toolchain, bootloader, kernel, and root filesystem. You’ll see how to build each of these elements from scratch, and how to automate the process using Buildroot and the Yocto Project.

- Access 478 pages and 14 hours of content 24/7.
- Evaluate board support packages offered by most system-on-chip or embedded-module manufacturers.
- Use Buildroot and the Yocto Project to build embedded Linux systems quickly and efficiently.
- Update IoT devices in the field without compromising security.
- Reduce the power budget of devices to make batteries last longer.
- Communicate with hardware without writing kernel device drivers.
- Debug devices remotely using GDB.
- Find out how to configure Linux as a real-time operating system.

Here’s the deal:
- $40 represents the total retail value of the bundle.
- The bundle has to be unlocked: only $1 at the time of writing.

More new deals. We post this because we earn a commission on every sale, so we don’t rely solely on ads, which many of our readers block. All of this helps pay for staff reporters, servers and hosting costs.
Other ways to support Nuveen: the above deal isn’t doing it for you, but you still want to help? Check out the links below. Disclosure: an account at Nuveen Deals is required to participate in any deals powered by our affiliate, StackCommerce. For a complete description of StackCommerce’s privacy guidelines, go here. Nuveen benefits from the shared revenue of every sale made through our branded deals site.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00721.warc.gz
CC-MAIN-2023-50
2,324
21
https://android.stackexchange.com/questions/1909/how-to-disable-startup-and-shutdown-sound-on-samsung-galaxy-s/1912#1912
code
Is there a way to disable the sounds on phone startup and shutdown? When I had JF3 firmware, there were no sounds, but now on JM1 the sounds are there :( Try Silent Boot from the android market. It automatically mutes your phone when you shutdown. In this post at Android Central, the suggestion is: - Open a root explorer, go to This requires root. This seems promising, but in my HTC E9 phone (Android 5) these files don't exist :(
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00506.warc.gz
CC-MAIN-2022-33
433
6
https://www.enotes.com/homework-help/differences-401465
code
Differences What is the difference between a tortoise and a turtle? How can you tell? According to my resources, the difference is not a scientific one, but is used in common speech to express habitat differences, and which is which depends on where you are in the world- Australia, for instance, uses the terms differently than the US. In the US, tortoises are the members of the chelonia that live on land, turtles live in the water, and terrapins do both - they live along shorelines and spend time both on shore and in the water. In Australia, only salt water turtles are called "turtles", and everything else is called a tortoise. So a sea turtle there is a turtle, but a snapping turtle would be a tortoise down under. A tortoise is a very specific species of reptile that is a land dweller. Some tortoises get to be so big that a small child can ride on them without harm to the reptile. Turtles live in or around water though they breathe air and lay their eggs on land. A turtle can not survive in barren areas, such as the Karoo Desert, where some tortoises prosper because, even though they go about on land, they must be in or near water, whereas tortoises are not restricted to the proximity of water any more than other land animals are. The major difference to me is that turtles are aquatic and tortoises are terrestrial. Turtles can come on land, but they live most of their lives in the water. Tortoises live their lives primarily on land. One physical difference that stems from this has to do with their feet. Tortoises have round feet while turtles have webbed feet.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890582.77/warc/CC-MAIN-20180121120038-20180121140038-00607.warc.gz
CC-MAIN-2018-05
1,587
5
https://exceptionshub.com/how-to-create-our-own-pdf-viewer-for-android.html
code
I want to build a PDF reader/viewer that could be used in my Android application, but I can’t use Google Docs to read my content, and I can’t use any PDF reader already installed on the device. It should live within my app and not expose my secure content over the Internet. What could I possibly use? Do I have to use the Android Native Development Kit to create my own viewer?

I’d recommend considering MuPDF, which has already been ported for use on Android several times without reliance on Java. MuPDF is optimized for lightweight on-screen PDF rendering, making it perfect for mobile use. Please note that MuPDF and all the derived projects are not suitable for commercial use, and you should consider alternatives if you are not developing an open source GPL project.

You will need PDF parsing libraries in Java… parse the document, and display the content in Android. Below are some useful links.

There is an API in Java; I'm not sure if it is supported on Android, but if possible, you can use the iText API to read and write PDF documents in your application, and it does not require that a PDF viewer is installed on the device.

You have to integrate this in your app; there is no other direct way to do that. I think you can use the iText library to read the PDF in Android. Here are a few links for that
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038075074.29/warc/CC-MAIN-20210413213655-20210414003655-00239.warc.gz
CC-MAIN-2021-17
1,296
10
http://geeshin.blogspot.com/2009/05/warlord.html
code
So Asa suggested that I put some "bling" around the area of focus. Hopefully this is enough, or not too much. Did really short research on some medieval ornate designs on the armor, and couldn't really find anything I liked right off the bat. So yeah, made it up myself based on some things I've found. Anyway, constructive criticism is always welcome. Heck, just leave a comment if you want. EDIT: Re-posed the arm and tweaked some things here and there. I'm not sure what to do with him to add "character." I may have to redo his pose make him convey that sense of character that I want for him.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825029.40/warc/CC-MAIN-20181213171808-20181213193308-00632.warc.gz
CC-MAIN-2018-51
597
3
https://faq.gigatribe.com/author/gigatribe-admin/page/5/
code
You can install GigaTribe on as many PCs as you wish, but one user cannot be connected to more than one computer at a time. Free users use the same software as Ultimate users. The username determines whether the software …

If you format your hard drive or use another PC, you just have to download the GigaTribe software and then connect using your user name. This will automatically enable you to use the Ultimate version of the software. The Ultimate …

You just have to know the username of a friend to offer them the Ultimate licence. If you are purchasing the Ultimate licence by credit card, start by choosing the username that will upgrade to Ultimate. As soon as payment …

To exchange files with your contacts, GigaTribe uses either Direct connection or VPN connection. You may select your connection mode in the Options menu, under the Network tab: select the Network tab, then check/uncheck the checkbox “Always check that the …

GigaTribe will run in direct connection mode only if your computer can receive incoming TCP connections from the Internet. Incoming connections can be blocked at the following levels: you must modify your router settings so that your router accepts

To use GigaTribe in direct connection mode with two computers on the same local network, you must create two separate users. Each computer has its own local IP address and will use a distinct TCP port. PC1: local …

If you can’t connect after a GigaTribe update, GigaTribe has been blocked by a firewall or a router that authorized the previous version. You may briefly disable your firewall or your router to identify which one is responsible and then …
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474617.27/warc/CC-MAIN-20240225135334-20240225165334-00497.warc.gz
CC-MAIN-2024-10
1,687
20
http://www.dbforums.com/showthread.php?997993-live-art-for-the-Internet-using-mySQL
code
I'm currently trying to create a piece of live art for the internet by grabbing numerical values from an online database, and inserting them into a Shockwave application (written in Director MX), which will then create a graphical representation of the values. I'm trying to figure out how to dip into an online database and grab the values I need. I've chosen MySQL as it is cross-platform and open source, but I am happy to use any online database format that I can freely dip into.

More specifically, I want to find an online database (with a static address so I can always get at it, e.g. http://www.undeadarmy.org/video.htm) which is updated at least every five minutes, and where each numerical value is assigned an identifier of some kind (e.g. a letter) so that my program can find its values. Ideally the values would always be within a specific range so that I could be sure my graphical representation would be reliable. Does anyone know of any good FAQs so I can teach myself how to grab data in this way? General advice and suggestions from experienced users would be greatly appreciated
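One low-friction pattern for this kind of piece is to have a server-side script export the current database values as a plain-text page of `identifier=value` lines, and poll that URL from the client. The URL, the line format, and the function names here are invented for illustration (Python stands in as neutral pseudocode, since Director reads network data its own way):

```python
import urllib.request

def parse_values(text):
    """Turn lines like 'a=0.42' into {'a': 0.42}, skipping malformed lines."""
    values = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, raw = line.partition("=")
        try:
            values[key.strip()] = float(raw)
        except ValueError:
            continue  # non-numeric value: ignore rather than crash the artwork
    return values

def fetch_values(url):
    """Fetch the exported page and parse it (requires a live server)."""
    with urllib.request.urlopen(url) as resp:
        return parse_values(resp.read().decode())

# e.g. poll fetch_values("http://example.org/export.txt") every five minutes
```

Clamping each value into a known range after parsing would give the reliability the post asks for, regardless of what the database currently holds.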
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125849.25/warc/CC-MAIN-20170423031205-00048-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,106
4
https://es.gta5-mods.com/users/XAvioniX
code
Select one of the following categories to start browsing the latest GTA 5 PC mods: I also have no entrance from the street. Doesn't work for me. GTA crashes when I load story mode. I've just reinstalled GTA, the latest versions of ScriptHookV, ScriptHookVDotNet, ASI Loader, and OpenIV. I've placed the 918 folder in dlc_patch, and modified the dlclist as well as the extratitleupdatedata, and GTA crashes during story mode loading.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721441.77/warc/CC-MAIN-20190425114058-20190425140058-00368.warc.gz
CC-MAIN-2019-18
434
3
https://support.soluto.com/entries/20261118-what-s-the-difference-between-pause-and-delay
code
What's the difference between "Remove from boot" (Pause) and "Delay"? Posted on July 10, 2011, 7:00 PM. The Background Apps feature in Soluto allows you to "Remove from boot (Pause)" or "Delay" applications from the Windows boot, helping the PC to start quicker. To use this feature, choose the PC you’d like to manage and then select “Background Apps”: "Remove from boot" (or "Pause" as it was previously called) means that the application will not run at boot, but at your command. For example, if you choose to pause Windows Live Messenger, you will need to start the application manually when you want to chat with your friends. "Delay" means that the application will not run at boot, but instead Soluto will launch it a few minutes after the boot is complete, when the PC is idle. For example, if you choose to delay Windows Live Messenger, the application will not start at boot, and you will appear offline to your friends until Soluto automatically launches it a few minutes later.
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989142.82/warc/CC-MAIN-20150728002309-00274-ip-10-236-191-2.ec2.internal.warc.gz
CC-MAIN-2015-32
999
5
https://forum.virtualmin.com/t/website-not-resolving-correcting-after-root-password-change/47314
code
I am running multiple virtual host servers on one IP. I recently had to change the main password for the Virtualmin root. The site I have created after the password change does not want to resolve to the correct address (it keeps resolving to my company site). I have checked all my settings in Virtualmin and the DNS manager and everything is correct. I have SSH'd in and the settings in my config file are correct as well. This is the first site I have created since the password change. Have I missed something??

Changing the root password wouldn’t have any bearing on Apache and resolving websites… there’s likely something else that changed previously that’s causing the issue you’re having. There are some thoughts on how to troubleshoot that issue in the article titled “The wrong website shows up”:

Thanks for your reply. I had already looked at that document and checked all the IP addresses; everything in the conf file looks correct and matches the other sites I have.

What distro/version is it that you have there? I agree with Eric that it is very unlikely a root password change has anything to do with website resolution. Just to make sure: are we talking about DNS resolution to IP addresses, or about Apache selecting the wrong vhost to serve? I’m asking because the latter you don’t usually call “resolving”. That’s what you call DNS lookups.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00318.warc.gz
CC-MAIN-2022-40
1,358
10
https://docs.devexpress.com/WindowsForms/DevExpress.XtraGrid.Views.Card.CardCaptionCustomDrawEventHandler
code
Represents a method that will handle the CardView.CustomDrawCardCaption event. NuGet Package: DevExpress.Win.Grid public delegate void CardCaptionCustomDrawEventHandler( object sender, CardCaptionCustomDrawEventArgs e ); Public Delegate Sub CardCaptionCustomDrawEventHandler( sender As Object, e As CardCaptionCustomDrawEventArgs ) The event source. Identifies the CardView object that raised the event. A CardCaptionCustomDrawEventArgs object that contains the event data. When creating a CardCaptionCustomDrawEventHandler delegate, you identify the method that will handle the corresponding event. To associate an event with your event handler, add a delegate instance to this event. The event handler is called whenever the event occurs unless you remove the delegate. For more information on event handler delegates, see Events and Delegates in MSDN.
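In practice you rarely construct this delegate by hand; you attach a handler to the CardView.CustomDrawCardCaption event and the grid invokes it through this delegate type. A minimal illustrative sketch follows; the view name, caption text, and specific painting calls are our assumptions, not taken from this page:

```csharp
// Wire up the event, e.g. in the form's constructor:
cardView1.CustomDrawCardCaption += OnCustomDrawCardCaption;

void OnCustomDrawCardCaption(object sender, CardCaptionCustomDrawEventArgs e) {
    // Paint the caption area ourselves using the supplied appearance settings.
    e.Appearance.FillRectangle(e.Cache, e.Bounds);
    e.Appearance.DrawString(e.Cache, "Custom caption", e.Bounds);
    e.Handled = true; // skip the default caption painting
}
```

Setting `e.Handled` to `true` is what suppresses the default drawing; leave it `false` to only adjust the appearance and let the grid paint as usual.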
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649986.95/warc/CC-MAIN-20230604125132-20230604155132-00236.warc.gz
CC-MAIN-2023-23
854
7
https://chat.osquery.io/t/3540/hey-everyone-i-was-wondering-if-anyone-had-any-experience-de
code
Hey everyone, I was wondering if anyone had any experience developing with osquery. I would like to write a security tool, but it seems like osquery is used just to ship logs to some aggregator; then you can move the logs to something like Kibana or the cloud, but analyzing them and sending an action or something back down to the original binary would be extremely time-consuming. Could anyone confirm my suspicions or point me in the right direction? It also seems like there is not a way to run osqueryd and get the response locally? Most of the documentation just shows it being moved to a log aggregator.

10/26/2019, 2:11 PM My understanding is this is possible. Are you trying to make a stand-alone EPP? Commercial EPPs either send data for analysis and mitigation, or they create a fat local client, I believe. What is it you are trying to do @Dustin M

10/28/2019, 2:43 PM I am open to both options; a stand-alone EPP would be the goal. How are they sending the data? The examples I have been looking at seem to not work in the repo. Ideally we would have smaller rules on a local client but most of the data would be sent to the cloud for analysis and mitigation
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817002.2/warc/CC-MAIN-20240415142720-20240415172720-00848.warc.gz
CC-MAIN-2024-18
1,166
5
https://build.opensuse.org/package/show/openSUSE%3ABackports%3ASLE-15-SP3/gnome-todo
code
GNOME To Do is a small application to manage your personal tasks. It uses GNOME technologies, and so it has complete integration with the GNOME desktop environment.

osc -A https://api.opensuse.org checkout openSUSE:Backports:SLE-15-SP3/gnome-todo && cd $_

Embed a build result badge wherever you need it. Select from the options below and copy the result over to your README or on your website, and enjoy it refreshing automatically whenever the build result changes. The Open Build Service is an
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816942.33/warc/CC-MAIN-20240415045222-20240415075222-00285.warc.gz
CC-MAIN-2024-18
494
9
https://brainmass.com/computer-science/algorithms/bit-setting-in-the-control-field-of-a-hdlc-information-frame-104479
code
A transmission system is communicating using an HDLC Frame format. Furthermore it is using the Asynchronous Balanced Mode (ABM) configuration. At the moment the receiver is sending an Information Frame to the transmitter. This is the 6th sequential message it is sending and it is acknowledging receipt of the 6th sequential message it received from the transmitter. The Poll/Final bit is set to 0. What are the bit settings in the message's control field? Assuming the standard format (8 bits long) control field. Since the message exchanged is an information frame, frame type is 1 bit long and has value 0. Also given that the Poll/Final bit is set to 0. Since we number frames starting ... Solution considers both control field formats - standard (8 bits long) and extended (16 bits long).
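The arithmetic can be checked with a short sketch. Assuming, as the solution does, that frames are numbered from 0 (so the 6th frame sent carries N(S)=5, and acknowledging the 6th frame received gives N(R)=6) and the standard 8-bit I-frame control field layout (bit 0 = 0, bits 1-3 = N(S), bit 4 = P/F, bits 5-7 = N(R)):

```python
def i_frame_control(ns, nr, pf=0):
    """Standard (8-bit) HDLC I-frame control field:
    bit 0 = 0 (I-frame marker), bits 1-3 = N(S), bit 4 = P/F, bits 5-7 = N(R)."""
    assert 0 <= ns < 8 and 0 <= nr < 8  # 3-bit sequence numbers, modulo 8
    return (nr << 5) | (pf << 4) | (ns << 1) | 0

# 6th sequential frame sent, acknowledging the 6th frame received, P/F = 0
ctrl = i_frame_control(ns=5, nr=6, pf=0)
print(f"{ctrl:08b}")  # → 11001010
```

The extended (16-bit) format works the same way with 7-bit sequence numbers in wider fields.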
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00116-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
793
8
https://www.c4dhi.org/news/welcome-to-our-new-research-assistant-david-cleres/
code
Welcome to our new research assistant David Cleres We are happy to welcome David Cleres as our new Ph.D. Candidate and doctoral student at the Center for Digital Health Interventions. David holds a master’s degree in Computational Science and Engineering from EPFL and a bachelor’s degree in Bioengineering from EPFL. Before joining our team, David had the opportunity to develop a broad set of skills in the field of Computer Vision through his Master Thesis conducted at UC Berkeley and his previous Data Engineer position at an Eye-Tracking specialized Start-up. Within his Ph.D. he will work on a digital biomarker for Chronic obstructive pulmonary disease (COPD).
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476374.40/warc/CC-MAIN-20240303111005-20240303141005-00462.warc.gz
CC-MAIN-2024-10
672
2
https://luxequality.com/case-studies/fashion-marketplace-platform/
code
Manual Testing for an Online Fashion Marketplace Platform Aug 2022 - Sept 2022 It is an online marketplace platform that provides a unique shopping experience for fashion enthusiasts worldwide. Their platform helps users discover styles that enhance their confidence and allow them to express themselves authentically. The project was introduced to the market without undergoing a comprehensive testing process, and all testing was performed by the developer's team and product owners. CHALLENGES AND SOLUTIONS We were called to thoroughly test the newly developed functionality and conduct integration and E2E tests. Our testing process introduced advanced test design techniques, which allowed us to identify previously undiscovered bugs. Let us share with you some aspects of the work on this project. To identify and log bugs both in existing and newly created functionality, more than 50 bug reports were created. Monitoring the bug's status and communicating with developers for updates and clarifications. To make sure that the application works consistently on all required browsers and platforms, BrowserStack was used as a testing tool to facilitate cross-platform and cross-browser testing, ensuring the application's compatibility across mobile and web platforms. It allowed developers and testers to assess the application's performance on various browsers and platforms, aligning it with the client's requirements for functionality and compatibility. FEATURES OF THE PROJECT The application was initially developed for local use and was available only in Dutch. Therefore, our testers meticulously scrutinized the text and flow for clarity and comprehensibility, utilizing translation services where necessary to provide accurate translation into English. TECHNOLOGIES, TOOLS, APPROACHES Our team conducted manual testing only.
While the automated testing was out of scope in this project, we would like to tell you about the technologies directly related to the testing process. - BrowserStack: Cross-browser and cross-platform testing tool used to ensure compatibility and consistent performance across different browsers and devices. - Jira: Project management and issue tracking tool for efficient task management and collaboration. - Notion: The connected workspace for closer collaboration with the dev team. - MongoDB Compass: A tool for conducting database testing within the project. - Effectiveness of the testing process: More than 50 bug reports were created. They contained detailed descriptions of identified defects, including their nature, severity, and potential impact on the web application's functionality. - Improved application performance: The user can now quickly switch between functionalities. All the integrated services are optimized, ensuring a seamless and continuous user flow. This optimization resulted in a smooth user experience. - Cost Savings: Identifying and resolving issues during the development phase through testing helped avoid expensive post-release bug fixes and maintenance. As a result, the client could save on development and support costs. - The application was successfully released and continues to progress in the market. The first step in the implementation process was gathering all the requirements. The team worked closely with the client to understand and document their needs and expectations. Manual Testing Planning and Setup Configured test environments and installed necessary tools, browsers, and software required for testing. Set up devices and browsers to ensure cross-platform compatibility. Followed the necessary steps, inputting relevant data as required. Compared actual results against expected results and noted any discrepancies. Conducted regression testing to make sure fixed issues did not introduce new problems every sprint. 
The team used BrowserStack to test the application on various browsers and platforms, identifying and fixing compatibility issues. Our team documented over 50 bug reports throughout the testing process to provide clear and comprehensive documentation. Reviewed the overall testing results to ensure coverage and completion. Shared testing results and insights with project stakeholders. Continuous communication with the team and stakeholders addressed any quality improvements or changes in the application. We completed these steps to elevate the quality and performance of the project in the E-commerce industry. - Manual testing - Functional testing - System testing - Integration testing Have a project for us? Let’s build your next product! Share your idea or request a free consultation from us.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00711.warc.gz
CC-MAIN-2024-18
4,699
39
http://www.photopost.com/forum/classifieds-installation-upgrades/143334-ad-not-uploaded-successfully-print.html
code
The ad was not uploaded successfully! I think I have a problem with images in the Classifieds; when I'm uploading the images it gives the following error: Warning: imagejpeg() [function.imagejpeg]: Unable to open '[path]/classifieds/data/2/thumbs/images.jpg' for writing: Permission denied in [path]/classifieds/image-inc.php on line 171 Warning: copy([path]/classifieds/data/2/large/iphone3gs1.jpg) [function.copy]: failed to open stream: Permission denied in [path]/classifieds/image-inc.php on line 353 I have changed to GD2 and GD1 and it's the same. Please let me know how I can move forward. You most likely have GD2. My suggestion is to make sure your classifieds data server path is correct in global options and your data directory is 777 permissions at every level beneath it. Also, per our server requirements, your php must have safe mode set to off.
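The suggested fix (the data directory writable at every level) can be scripted; a minimal sketch with a hypothetical path — 777 is what the vendor recommends here, though a tighter mode may be preferable on shared hosts:

```python
import os

def make_writable(root, mode=0o777):
    """Recursively apply `mode` to a directory tree: the root,
    every subdirectory, and every file beneath it."""
    os.chmod(root, mode)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chmod(os.path.join(dirpath, name), mode)

# Hypothetical location of the classifieds data directory:
# make_writable("/var/www/forum/classifieds/data")
```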
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704590423/warc/CC-MAIN-20130516114310-00054-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,032
11
http://helpdesk.princeton.edu/kb/display.plx?ID=9548
code
From the KnowledgeBase How to install McAfee VirusScan 8.8 on a Windows computer. Note that all University DeSC computers are automatically updated and need no intervention. This information applies to all versions of Windows computers. To determine the version of McAfee on your Windows computer, right-click the McAfee icon (the small red and white shield in the lower right section of your display) and select "About VirusScan Enterprise". Windows Download and Installation Follow these steps in order to successfully install/update your McAfee AntiVirus program. You must be logged in as a user with administrator privileges to complete these steps. - Note: Windows 98/ME computers are NOT SUPPORTED for McAfee VirusScan. - Windows 2000 is no longer supported by this installer. Remove all other antivirus software from your computer Most antivirus software will be automatically uninstalled from your computer during the installation of McAfee VirusScan. This includes the Princeton licensed version of Symantec AntiVirus. Download the installer to your desktop Using any browser, download McAfee VirusScan from: - You will need to authenticate using your Princeton University netID and password; you may need to type the netID as PRINCETON\netID Install the latest version of McAfee VirusScan - Save VirusScan_Install.exe to your desktop (Note: Do not rename the file.) - Double-click on the VirusScan_install.exe installer file. If you receive a User Account Control prompt, respond Yes - This screen will appear first, while waiting for the installer files to be copied to the computer. After the files have been copied, this message will appear. - You will be asked some questions about your status. - If the computer was not purchased with personal funds, respond to the remaining prompts. - Installation will take several minutes and run silently. - Restart the computer when prompted.
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051342447.93/warc/CC-MAIN-20160524005542-00145-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
1,896
21
http://forum.kingsnake.com/feeders/messages/19325.html
code
Posted by rodmalm on May 09, 2003 at 08:04:44: I'm not positive I have a problem, but I suspect that I do. I have 12 colonies of blue rats with babies and I occasionally sell a few just-weaned babies to a local pet shop for pets. Anyway, customers that have bought them from the shop are coming down with ringworm! Here's the thing, all of my rats look perfect--no missing or thinning hair, no scabs, no scratching noticed. I've thoroughly looked them all over and they look great. Do you think I should treat them for ringworm? If so, is there an easy way? What should I use? I am thinking of "dipping" them in a Nolvasan solution (chlorhexidine diacetate) when I clean their cages along with disinfecting the cages with the same. (I have about 50 adults and 150 babies so treating them individually with anything like a salve/shampoo on a daily basis is out of the question.)--LOL, yeah right, shampooing rats!! Also, since I can't see any symptoms, any idea how I can tell when things are better? Lastly, does anyone know if a black light purchased at the local hardware store ($10) would work the same as a "Wood's" light? I understand some species of the ringworm fungus will illuminate under the right light and I don't want to spend $300 for a Wood's light if a normal black light will suffice. I already bought a $10 black light, and while the rats look amazing under it, I can't see anything that looks like it is glowing (except for their white fur).
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814101.27/warc/CC-MAIN-20180222101209-20180222121209-00630.warc.gz
CC-MAIN-2018-09
1,492
10
https://cryptecon.org/activities/analysis-of-blockchain-based-token-economies/
code
We assist in substantiating the design of platforms and cryptocurrencies. At an early stage, this will be done by giving comments on White Papers – both regarding the design of the underlying token and regarding the proposed business model in terms of consistency and completeness of the proposed economy (e.g. with regard to regulating money supply and incentives for using a new token). If there are serious economic issues that threaten a smooth execution of the business model, we lay them out and give our advice on how they are best mitigated. In order to conduct meaningful formal analysis, simulations and stress tests, we specify demand and supply functions (e.g. for money), so that the system can be analyzed using game theory and microeconomics or simulated in numerical software. Specifically, we identify which parameters are endogenous and can be calculated, and which parameters are exogenous. For the latter, reasonable assumptions need to be made, some of which can be derived from the business plan or a White Paper. We then test the stability of the proposed model given different assumptions on the involved players’ behavior, trends and shocks for key parameters. For example, we test whether speculators who change their behavior depending on expectations on future developments may threaten the desired exchange rate equilibrium.
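As an illustration of the kind of formal analysis described, consider the simplest possible specification: linear demand and supply functions for the token, with all parameters hypothetical (in practice they would be derived from the business plan or White Paper):

```python
def equilibrium_price(a, b, c, d):
    """Linear token market: demand D(p) = a - b*p, supply S(p) = c + d*p.
    Setting D(p) = S(p) gives the equilibrium price p* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

# Hypothetical baseline parameters
p_star = equilibrium_price(a=100.0, b=2.0, c=10.0, d=1.0)  # → 30.0

# Stress test: speculators exit and demand drops 20% (a falls to 80)
p_shocked = equilibrium_price(a=80.0, b=2.0, c=10.0, d=1.0)
print(p_star, p_shocked)
```

A real analysis would replace these closed-form functions with behavioral rules for each player type and simulate trends and shocks for the key parameters numerically.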
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489304.34/warc/CC-MAIN-20190219020906-20190219042906-00273.warc.gz
CC-MAIN-2019-09
1,357
2
https://archive.sap.com/discussions/thread/3885699
code
GR with service and partial invoices I have some GRs in service with ML81N and partial invoices, and the accounting is not equal between the GR account and the invoices. I don't want to modify the quantity in the GR because for that I would have to make an MR8M on all MM invoices. Is there a solution for that? Thanks for your ideas. Can you ask the business and yourself why the business does a GR of 2000 and an invoice of only 1000, and what about the balance value of 1000? The accounting entries: Service consumption account: Dr (2000 +) GR/IR clearing account: Cr (2000 -) After invoice posting Vendor account: Cr (1000 -) GR/IR clearing account: Dr (1000 +) You can ask the question: how will you handle the balance value of 1000 for invoicing? Are you going to post an invoice again for 1000? Any other alternative from your side?
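The double-entry arithmetic in the reply can be sketched with signed postings (amounts from the example; credits negative, debits positive; a nonzero sum on the GR/IR clearing account is the open, uninvoiced value):

```python
# GR/IR clearing account: credited at (service) goods receipt,
# debited at invoice receipt.
gr_postings = [-2000]   # GR: service consumption Dr 2000 / GR/IR Cr 2000
ir_postings = [+1000]   # Invoice: GR/IR Dr 1000 / vendor Cr 1000

open_balance = sum(gr_postings) + sum(ir_postings)
print(open_balance)  # → -1000: value of 1000 still awaiting an invoice (or a GR reversal)
```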
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202131.54/warc/CC-MAIN-20190319203912-20190319225912-00265.warc.gz
CC-MAIN-2019-13
769
15
https://community.mendix.com/link/space/widgets/questions/103779
code
Unable to load external module in mendix widget or application Sjoerd van Bavel Is it written in Dojo (Custom widget) or React (Pluggable widget)? I have encountered this error many times writing a Dojo widget. You have to be sure you are importing the library in the correct manner with AMD. Please verify if that is the case. Next to that, Mendix bundles its widgets whilst deploying with webpack, clustering all widgets in one file. If your widget is referencing some global variable also defined in another widget you can encounter the multiDefine error. I have experienced this many times because multiple widgets were loading the jQuery library and defining the jQuery variable $ multiple times. EDIT: Since you are using Dojo, I would suggest trying to solve your problem in the following manner First try to find an AMD-supported resource of your library file and, with the ‘Bundle widgets when running locally’ setting, try to get it to work If this fails, try the ‘Check widgets’ feature in Studio Pro. This should tell you there is a problem with your widget If an error occurs, the error can be found in the log file, to be located at: \deployment\data\tmp\dojo\build-report.txt
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475727.3/warc/CC-MAIN-20240302020802-20240302050802-00167.warc.gz
CC-MAIN-2024-10
1,194
10
http://www.frontpagemag.com/2010/jlaksin/david-brooks-obama-is-the-pragmatic-leviathan-nytimes-com/
code
When I was in college, I was assigned “Leviathan,” by Thomas Hobbes. On the cover was an image from the first edition of the book, published in 1651. It shows the British nation as a large man. The people make up the muscles and flesh. Then at the top, there is the king, who is the head and the mind.
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115865430.52/warc/CC-MAIN-20150124161105-00124-ip-10-180-212-252.ec2.internal.warc.gz
CC-MAIN-2015-06
305
1
https://vespia.io/blog/deniz-arda-aslan-connecting-the-dots-of-data-vespia
code
Deniz Arda Aslan: Connecting the Dots of Data | Vespia Share this article on [UPDATED ON DECEMBER 1ST, 2022] My name is Deniz Arda Aslan, I am from Turkey, currently working as a Data Science intern at Vespia. My data science journey is all about connecting the dots of my life and I am proud that Vespia is a significant part of it. During my studies in Industrial Engineering, I learned how much I enjoyed incorporating technology into my work. Something else I learned about myself is that I always will try to make sense of data, no matter where I go, no matter what I do. There is a particular quote said by Steve Jobs, CEO and Founder of Apple and Pixar, during his commencement speech at Stanford in 2005 that really inspires me. It goes like this: "You can't connect the dots looking forward. You can only connect them by looking backward. So you have to trust that the dots will somehow connect in your future. You have to trust in something -- your gut, destiny, life, karma, whatever." - Steve Jobs When I look back at my journey to where I am now I realize that I am here as my own dots are connecting, and working at Vespia is not a coincidence on this path. In this blog, I will briefly explain the factors that shaped me into the Data Scientist I am today. Firstly, I will mention that I’m a big fan of the Python programming language, which I learned before I started university. Secondly, I greatly enjoy working with data. I’ve always been passionate about using data effectively, making it meaningful by showing on a dashboard, analyzing, and presenting it to my team while brainstorming ways of utilizing the data to improve processes. Another factor that guided my career today was my graduation project. I remember that my Academic Advisor suggested conducting research by using the data in manufacturing. The truth is that this project was a dot that led me to experience this field even deeper by working intensively on data science. 
Last but not least, bringing all my interest and knowledge to Vespia is what makes me get up in the morning nowadays. What I do in Vespia Customers will always look at the output. Good output in Vespia means presenting detailed and accurate data to customers. As a Data Science professional, I am continuously working towards this goal. At Vespia, each week brings different jobs and new tasks for me. Sometimes, the work includes data science, and other times, new technologies. What I usually do is develop tools and analyses for more effective use of our data; in addition, I work on predictive models. Questions like, “What do we need?”, “How can we implement this structure?” are the ones I ask the most in my day-to-day at work. That is why I often work on theoretical structures. I research how to build the infrastructures of the technologies and plan to develop them with my team at Vespia. Before I got to know Vespia, my idea about business verification was quite basic. That was not a problem, as the company and my colleagues assisted me in my onboarding and continuous training on concepts related to Know Your Business (KYB), Know Your Customer (KYC), Anti-Money Laundering (AML), compliance, and engineering trust. A special highlight goes to our Product Manager, Artem Sherbynka, who has been patient enough to provide detailed explanations on all these terms, which ultimately helped me understand this RegTech world more in-depth and put my data science skills in the right direction. Anyone from our team can relate to the fact that the learning curve in Vespia is quite fast. This is one of the things I like most about the company! Personally, the team is one of the most important factors for me when working in a company. Vespia has a small team but its team members have been carefully selected and that is what makes it so strong. When I have a technical problem, I always get support from our developers.
Also, everyone is helpful at any time and open to communication, considering that the team is fully remote. This has helped me adapt much more quickly and smoothly. Another reason I'm at Vespia is its high potential and continuous growth. Also, working at the company makes it possible for me to take more responsibility and see other processes at the same time. The factors that attracted me the most to Vespia are the opportunity to constantly learn and grow, the sense of belonging to something bigger than myself, and the pride of building trust in our vision with the help of data. What makes my heart beat a little faster is... I've always enjoyed building things, especially hands-on things like cooking, crafting, and even writing. Here, I count programming among those hands-on things. Creating things and innovative work are what excite me. This is why I enjoy developing our product so much nowadays. I find it both exciting and satisfying to contribute to developing a product our customers love, which, at the same time, helps them optimize their work. What is Next? At Vespia, we work every day to build a great product for our users. We have very specific plans and studies, especially on the analytics aspect. We are working on predictive and classifier models that will enable our users to make better predictions about companies. We also continue to add more data points to improve our data quality and continue to design a great experience for our users. Interested in learning more about Data Science? Connect with Deniz on LinkedIn! Enjoyed reading about our people? Get acquainted with Elena, our awesome Growth Hacker!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473598.4/warc/CC-MAIN-20240221234056-20240222024056-00571.warc.gz
CC-MAIN-2024-10
5,589
23
https://hexdocs.pm/casconf/Casconf.Loader-function-load.html
code
Loads and casts a specific configuration item. It will raise the following errors: - Casconf.CastError (in case the supplied value couldn't be converted to the given data type) - Casconf.NoValueError (in case none of the loaders have found a value)
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00579.warc.gz
CC-MAIN-2021-21
340
6
http://tfideas.blogspot.com/2010/04/fixing-gvt-by-giving-politicians-pay.html
code
I rail on politicians, a lot, and periodically come up with ideas for how to get better quality, less groupthinky congresspeople. Some of my ideas I'm quite proud of - sealing voting records until a few months before elections to weaken political coalitions, for example, or making all political contributions anonymous to reduce reciprocity. However, turns out I may be very, very wrong about one of my ideas. I wanted to make politics a lower-status job - living in barracks, unpaid, etc - to reduce the number of "politicians for life". If politicians saw it as a term of service instead of a cushy position, i thought maybe they'd do more work towards good policy instead of good politics because reelection mattered less. However, turns out that a pair of studies (one in Brazil and one in Italy) has indicated the reverse. Making politics higher paid draws higher-quality candidates in terms of education and in one case, experience and previous profession. Both articles concluded that these better politicians were more efficient and did a better job overall. Of course, the type of people who go into politics in Brazil and Italy may be different than in America - we have a lot of millionaires in Congress as it is - so I'm not sure if it means I'm definitely wrong. However, it definitely raises the possibility.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590794.69/warc/CC-MAIN-20180719090301-20180719110301-00324.warc.gz
CC-MAIN-2018-30
1,323
5
https://forums.macrumors.com/threads/applescript-1708-error.870041/
code
hey everyone, I just started learning applescript yesterday, so bear with me. I wanted to write a script for iChat which opens an IM window every time a specific buddy signs on. Here is my script: tell application "iChat" if (buddy became available "hetu1989") then show chat chooser for "hetu1989" end if end tell When my screen name "hetu1989" signs on, my iChat gives me the following error: I tried to find out how to get rid of this error, and I saw that some people had also encountered it, but those threads didn't help me much. So, any help will be appreciated. Thanks a lot!
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511761.78/warc/CC-MAIN-20181018084742-20181018110242-00383.warc.gz
CC-MAIN-2018-43
582
1
https://support.cloudcheckr.com/copy-an-arn-resource-to-your-account/
code
After you have created and attached a secondary policy to your cross-account role, you need to copy an AWS Role ARN to your CloudCheckr account. - Login to the AWS Management Console. The AWS services page opens. - Scroll down to the Security, Identity & Compliance section and select IAM. The Welcome to Identity and Access Management screen displays. - From the dashboard, click Roles. The Roles page opens. - In the Search text field, type the name of the new cross-account access role to filter the list. - Click the name of the new cross-account role from the list. The Summary page opens. Notice the Role ARN value at the top of the page. ARN values use this format: arn:aws:iam::YourAccountIDHere:role/CloudCheckrRole. For the purposes of this procedure, we have masked the true ARN value. - Click the Copy icon next to the Role ARN. - Launch CloudCheckr. - Select an account from the list. - From the left navigation pane, select Account Settings > AWS Credentials. The Edit AWS Credentials page opens. The Use a Role for Cross-Account Access tab displays by default. - In the AWS Role ARN text field, paste the role ARN value you copied from AWS. - Click Update. CloudCheckr will begin populating your account with data. Depending on the size of your AWS account, this can take a few hours or more. Note: You can access specific permissions to allow CloudCheckr’s Automation features to work here. You can add these permissions as another policy to your cross-account access role. How Do I Access the IAM Dashboard? Preparing Your AWS Account
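The ARN format shown above can be sanity-checked before pasting it into CloudCheckr. A minimal sketch — the pattern is an assumption covering only IAM role ARNs of the form quoted, not the full ARN grammar:

```python
import re

# Matches IAM role ARNs like arn:aws:iam::123456789012:role/CloudCheckrRole
ROLE_ARN = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@-]+$")

def looks_like_role_arn(value):
    return bool(ROLE_ARN.match(value))

print(looks_like_role_arn("arn:aws:iam::123456789012:role/CloudCheckrRole"))  # → True
print(looks_like_role_arn("arn:aws:iam::123456789012:user/Alice"))            # → False
```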
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210463.30/warc/CC-MAIN-20180816054453-20180816074453-00037.warc.gz
CC-MAIN-2018-34
1,553
23
http://mail.openjdk.java.net/pipermail/2d-dev/2012-August/002666.html
code
[OpenJDK 2D-Dev] request for review: 7150594: VM chash in JCK api/java_awt/Image/ConvolveOp/ tests for 64 bit jdk8 on linux. andrew.brygin at oracle.com Thu Aug 2 14:36:55 UTC 2012 Could you please review a fix for 7150594? This problem is triggered by the fix for CR 7113017. In particular, that fix replaces malloc.h with stddef.h in mlib_types.h. This change leads to mlib_sys.c being compiled without a forward declaration for the memalign() routine, and causes the following warnings: mlib_sys.c:96: warning: implicit declaration of function 'memalign' mlib_sys.c:96: warning: cast to pointer from integer of different size This causes a problem on systems where the size of an integer is less than the size of a pointer: the pointer value is clamped, and use of the clamped pointer causes the observed crash. The proposed solution is to include the malloc.h header directly in mlib_sys.c. Please take a look.
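For readers unfamiliar with implicit declarations: without a prototype in scope, a pre-C99 C compiler assumes memalign() returns an int, so on a 64-bit platform the upper 32 bits of the returned pointer are discarded. The truncation itself can be sketched in a few lines (the address below is made up for illustration):

```python
# A hypothetical 64-bit heap address such as memalign() might return.
full_ptr = 0x00007F3A1C2B4D10

# With no declaration in scope, the compiler assumes a 32-bit int return
# value, so the caller keeps only the low 32 bits of the pointer.
clamped_ptr = full_ptr & 0xFFFFFFFF

print(hex(full_ptr))     # 0x7f3a1c2b4d10
print(hex(clamped_ptr))  # 0x1c2b4d10

# Dereferencing the clamped value points into unrelated (or unmapped)
# memory, which is the crash the fix addresses by including malloc.h
# so that memalign() is properly declared.
assert clamped_ptr != full_ptr
```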
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306346.64/warc/CC-MAIN-20220128212503-20220129002503-00103.warc.gz
CC-MAIN-2022-05
920
16
https://etutorials.org/Programming/secure+coding/Chapter+6.+Automation+and+Testing/6.5+Case+Studies/
code
In the following sections, we describe a few situations we've dealt with in our careers that illustrate various scenarios that are relatively common in industry. We provide insight here as to how we approached the problems and the types of testing, tools, and methodologies that we used along the way. We've tried to provide some insight into the rationales we used in making our various selections. Several years ago, we were asked by a telecommunications company to perform a "paper review" of the security architecture of a so-called full services network (FSN), a video, audio, and data network that was to run on top of an Asynchronous Transfer Mode (ATM) infrastructure. The design was intended to provide bandwidth on demand to their customers for a wide range of these different services. In discussing the project goals and objectives with the company, we learned that their chief concern was in preventing people connected to the FSN from being able to fraudulently provision services (and not get charged for them). Because service theft represents their primary source of lost revenue, this seemed logical to them. We started by reviewing the network and server architecture in-depth, looking for flaws in the design of how data or administrative traffic would traverse the network. Of particular attention during this part of the review was ensuring that the identification and authentication (I&A) of all customers on the network was sufficiently strong to prevent a customer from forging his identity (and thus stealing services). We spent days poring through the documentation and came up with very little. Next, we started to concentrate on how network circuits are provisioned by diving deep into the ATM architecture. This time, we concentrated on transport-layer network protocols: could they be spoofed, forged, or otherwise compromised? 
Here too, we found that the company engineers who had designed the network clearly understood the technologies that they were implementing and had done a superb job. At this point, we were nearly ready to declare defeat (at least, from our perspective), when we decided to look at the project a bit differently. Instead of looking simply for flaws in how the network technology was designed, how about looking at the situation from the outside in? How had attackers historically attacked data networks? How would that impact the ATM underpinnings? Because one of the services that would be available to the end customer was going to be data communications, we decided to assume that the customer is connected to a data circuit and is otherwise blissfully ignorant of the underlying ATM networking. So this time, we looked at approximately ten previously observed attacks on IP networks, ranging from ICMP data flooding to denial of service attacks. From our theoretical model, we asked: what would happen to the ATM side of the network in the face of those IP attacks? What we found (remember that this was purely theoretical) was that it was likely that many of the extant IP-level attacks would wreak havoc on the underlying ATM network. In short, the designers of the ATM infrastructure had done a great job of addressing the domain they were most familiar with but had failed to consider the ramifications outside that domain. When we presented our findings to the design team, they were quite surprised. Some attacks that had been observed on the Internet for more than a decade were entirely new to them, and indeed, the engineers had not adequately considered them in their design of the FSN. So, they went back to the proverbial drawing board to make some adjustments to that design. This case study teaches several lessons. 
The following are especially important: It's important to include domain experts in the design team that can speak to all of the security threats that a design is likely to face. It's equally important that the testing team be able to think like an attacker in reviewing the application. Both of your authors were involved in a large-scale review of dozens of legacy applications at a major corporation. The object of the review was to analyze the company's production data-processing environment for security vulnerabilities. The company had recently undergone a massive restructuring of many of its business applications, transitioning them from traditional database applications into web-enabled applications with more modern front ends. The security officer of the company was (rightfully) concerned that they had inadvertently introduced vulnerabilities into their production business systems by going through this restructuring. So, with that concern in mind, we set out to review most of the applications for their levels of security. The approach we took evolved over the life of the project for a number of valid and different reasons. We started out deciding to use these methods: We undertook several external and internal network scans for OS-level vulnerabilities and misconfigurations. These scans were good at finding operations-level problems, but it turned out that they failed to address the business impacts of the applications under review. Similarly, we ran numerous host-level reviews of the OS configurations. These pointed out more vulnerabilities and misconfigurations in the application servers, but also failed to hit the business impacts of the applications themselves. We briefly considered going through a static code review but quickly dismissed the idea for a variety of reasons. First and foremost, there were simply too many applications; the undertaking would be too enormous to even ponder. 
Second, the tools available for doing static code analysis were few, and the languages we needed to evaluate were many, and the tools were unlikely to find a sufficient set of real problems in the code. The testing that we did was useful to a degree: it pointed out many vulnerabilities in the applications, but those vulnerabilities turned out to be primarily those in the operating environments of the applications, not the applications themselves. The results weren't that useful, though, because they didn't provide the application owner with a clear list of things to correct and how to correct them. Further, they didn't in any way quantify the business impact or risks to the corporation. Thus, although we could cite hundreds of vulnerabilities in the environments, we couldn't make a sufficient business case for the company to proceed with. Back to the drawing board! Next, we decided to interview the business owners, software designers, and operations staff of each of the applications. We developed a desk check process in which we asked each of these people the same questions (see the sidebar Legacy Application Review Questions for examples) and provided normalized results in a multiple-choice format. That way, the results would be quantifiable, at least to a degree. Legacy Application Review Questions: During our discussions with the application business owners, software designers, and operators, we asked a series of questions, some formal and some ad hoc. Here are some examples of the questions that we asked: In conducting these interviews, we quickly recognized how important it was for us to make each interviewee feel comfortable talking with us. As we discussed earlier, it's important to create an atmosphere of wanting to find flaws in code in a way that's entirely nonjudgmental and nonconfrontational. In this project, we helped the interviewees relax by adopting a nonthreatening manner when asking questions likely to raise sensitivities.
Even though our questionnaires were multiple-choice in format, we frequently went through the questions in a more narrative manner. At one point, we experimented with distributing the questionnaires and having the various participants fill in the responses and send them to us, but we found it more effective to fill in the forms ourselves during the interview process. This approach turned out to be very useful to the corporate decision makers. With the added information coming from our interviews, we could demonstrate business impacts much more effectively, and we could essentially grade each application on its degree of security. What's more, we could provide the business owner and the software developer with a clear list of things that should be done to improve the security of the application. The lists addressed operational factors as well as design issues with regard to the application code itself. (It did, however, stop short of reviewing actual source code for implementation flaws.) Though our business-oriented approach worked best in this case study, a more technology-oriented approach is frequently more useful to the actual code development team during application design and implementation. That's because a technology-oriented solution can provide the development team with a list of specific actions to take to secure the technology components of the system, and that's exactly what they're likely to be looking for. The business approach did a great job in this project, though, at meeting the requirements of analyzing the security of the legacy applications and assessing the potential impact to the corporation of a compromise in the security. This case study teaches several lessons. The following are especially important: When confronted with the volume of applications studied during this project, it is not always feasible to conduct reviews down at a code level. 
Instead, the designs can be reviewed by interviewing key personnel, and the operational configurations can be tested empirically by conducting network penetration tests. While not perfect, this approach represents a reasonable compromise of time and cost. A process like the wholesale "web enabling" of older applications may lead to additional design-level vulnerabilities in an application that were absent from the original design. When making such a sweep, you should treat the changes with at least the same degree of security diligence that you applied to the original design. Don't treat such a project as a simple application maintenance procedure. In one web portal design project we participated in, the development team had some rather substantial security hurdles to overcome. Among other things, the portal was to be used to provide highly sensitive reports to clients of the company developing the software. Furthermore, the company was a security service provider, so it had to exercise the very best in secure software practices to set an example for its clients and to protect its reputation. In the following sections, we've included the story, told by the two major developers themselves (with as little editing by this book's authors as possible) of what they actually did to develop a state-of-the-practice secure web portal. We needed to provide a secure, reliable, and easily accessible mechanism for delivering reports to our clients. Not all of our clients had access to the encryption mechanism that we used (PGP) and, while some of our clients were Windows-based, others used Unix. We knew that all of our clients had access to the Internet, so the logical solution was a secure web-based portal; a portal would allow us to have a standard methodology for delivering reports to our clients. In addition to being web application security testers, we had also written a few internal administrative applications ourselves. 
Unfortunately, none of the applications we had developed had needed the degree of security required by our proposed Internet-accessible portal. On completion, the portal would house our customers' most critical data, including reports of all of their security vulnerabilities. The fact that the application was going to be accessible from the Internet raised a big red flag for us from a security perspective. Anyone connected to the Internet could potentially attack the portal; therefore, we needed to make security a top priority. Because both of us were traditional engineers, we started with an engineering approach to this process (envision, define the requirements, develop a design, implement the design, then test and retest). We wanted a web portal that securely allowed users to view reports, contact information, and other client-specific information. First, we had a brainstorming session to identify what the project needed to encompass, who needed to have input, and what resources could be allocated. We needed to define the functionality requirements, so we obtained input from the project managers as well as feedback from our clients. Next, we drafted a document to define clearly what we were trying to do. We then asked ourselves what the security requirements should be. Because we both had tested a number of web applications in the past, we came up with our own list of security requirements, but to be complete we also searched the web, newsgroups, and mailing lists for recommendations. The www.owasp.org site was particularly helpful. When we started to design the portal, our principal concerns were authentication, session tracking, and data protection. Authentication is the "front door" by which a user enters an application. The authentication must be properly designed to secure the user's session and data. Our authentication design was based entirely on the security requirements that we defined in the previous stage of development. 
However, after a round of initial prototype testing, we found that our original requirements did not include proper error checking to avoid SQL injection attacks, so we added the required error checking to secure the authentication of the application. For session tracking, we had seen a number of off-the-shelf implementations, but we felt that we could do better. We liked the idea of having the user reauthenticate on each page, so we came up with our own session tracking mechanism. Our design did require users to have cookies set on each page. Although that increased the overall workload of the application, we thought that this overhead was worth the extra security it provided. We based the design of the reauthentication mechanism entirely on avoiding the poor practices that we'd seen during prior application tests. Finally, we wanted to come up with a database scheme that protected our clients' data. We'd seen other web application designs that allowed one user to access another user's data, simply because the user's data resided in the same database tables. It was critical that this application protect each client's data, so we chose to isolate client-specific data into separate tables in the database. This also gave us the option to make database permissions granular to each table, and that granularity helped protect our client data. Although there is a cost of having specific tables for each client, we thought the security benefits outweighed the cost of having more tables. Once we had the blueprints for our portal design, we started the actual implementation. We needed to decide on the technology to use for the web server, database, and middleware. In addition, because not all web servers, databases, and middleware packages are compatible with each other, we needed to consider products that would work in concert. Because the web server is the remote end that a user sees, we decided to choose that product first. 
We needed a product that had been proven to be secure, had been well tested, and had been used in the field for some time. Our basic options were Netscape Enterprise Server, Microsoft's Internet Information Services (IIS), and Apache's HTTPD Server. Our primary concern was security, and our secondary concern was cost. Naturally, other attributes such as product stability were also important. Because of the number of vulnerabilities and required patches associated with Microsoft's IIS server, we decided against that product. Both Netscape's Enterprise Server and Apache's HTTPD Server have a history of being quite secure and stable. Because in this case cost was a secondary issue, we chose Apache. Next we needed a platform on which to run our web server. Fortunately Apache runs on most operating systems. So again, we returned to our priorities: security, cost, and stability. Linux offered a secure reliable platform for free, and we had ample experience with securely configuring Linux. We also considered the various BSD-based operating systems. In the end, we decided to go with Linux, primarily because we had more experience with that operating system than with any of the BSD family. For the database implementation, we figured that there were four realistic options: Oracle, MySQL, PostgreSQL, and MS-SQL. Again our priorities were security, cost, and stability. All of these databases have the ability to be properly secured. Because PostgreSQL was a fairly new player in the large-scale database deployment arena, we decided not to use it. For consistency with our operating environment, we decided that we wanted the database to run on the same platform that our web server was running on, Linux. Because MS-SQL does not natively run on Linux, we eliminated that database as well. Now we were down to MySQL and Oracle. Fortunately, we had an Oracle infrastructure available to us, so that's what we chose. 
Oracle can be securely configured as a stable environment, and because we had the licensing available to us, cost was not a major issue here. Next we needed something running on Linux that could glue the web server (Apache) and the database (Oracle) together. PHP meets these requirements; it can be securely configured and is free. In addition, we both had experience programming in Perl and PHP. PHP is derived from Perl but is tailored to work with embedded HTML tags, so it was a natural choice for us. Once we'd chosen our implementation platforms, we needed to make sure that we could properly configure each of the elements and still implement our design. For our PHP configuration, we cross-referenced some checklists (specifically, http://www.securereality.com.au/archives/studyinscarlet.txt) to make sure that unsecure options were disabled. Because there were only two of us on the development team, we both reviewed all code implemented to ensure that we were using the best security practices. We also found that the checklists for the PHP configuration had a number of PHP language do's and don't's. In implementing the code, we supplemented our own programming knowledge by following these guidelines. During this phase, we ran our common buffer overflow tests. Even though buffer overflows aren't problematic in PHP, we wanted to test the application as a whole; even if the front end didn't overflow, the MySQL back end still could. We also configured the database to be able to handle data of a certain size and to limit users from filling the database. We made sure to check all code exit points so that the application always terminated to a known state. If we hadn't done this, the application could have left database connections open and possibly caused a resource denial of service condition. Luckily, the code was short enough that we could visually review the code for data validation. All input that was accepted from the user was first filtered. 
Had we not checked the code for data validation, the application could have been vulnerable to a SQL injection or cross-site scripting (XSS) attack. Finally, we had our product tested by other security experts within the organization during an independent evaluation. The testing team was provided with five accounts with which to test the application. The objective of the test was to identify any vulnerabilities within the application, operating system, or network configuration. Prior to initial deployment of the application, we had the OS tested with a thorough network penetration test from a well-known and well-trusted security testing team. They identified some additional low-level security issues. Once we'd put these additional security measures in place, we retested the entire application. Only after we'd addressed all security issues was the application deployed. Fortunately, we had the foresight to build the security requirements into the beginning of the process, which made correcting minor issues much cheaper than it would have been. Security testing did not stop here. It continues on an ongoing basis. Every week, the application is scanned for basic vulnerabilities, and every quarter, the entire application is retested. In addition, all passwords are cracked once a week to find any weak passwords. With this project we basically needed to make security decisions through all phases of the development process. We consistently had to refer to newsgroups, vendor web sites, and security web sites to make sure that we were making intelligent decisions at each step in the development process. We found that secure coding practices alone did not provide enough protection and that we needed to scrutinize all elements of the application. This case study teaches several lessons. The following are especially important: When security is such a high priority for a project from the onset, many of the design decisions are driven primarily by security requirements. 
It is vital to exercise a great deal of caution in designing the identification and authentication (I&A) system, as well as the state tracking and data compartmentalization systems. For this type of application, the implementation and operation of the system should not be treated as static; weekly and quarterly tests ought to be conducted to test for newly discovered vulnerabilities on an ongoing basis. The design team needs to consult numerous external sources for design ideas. It is worthwhile to divide the engineering team so that some of the engineers concentrate on the design and implementation of the code, while others are called on to test the software from a zero-knowledge perspective. The net result is a reasonably objective testing of the application.
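The data-validation lesson from the portal case study (treat every user input as data, never as query text) can be sketched with a parameterized query. This is an illustrative example in Python with an in-memory SQLite table; the table and column names are invented, and this is not the authors' actual PHP code:

```python
import sqlite3

# In-memory stand-in for the portal's per-client report table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'report-42')")

def fetch_secret_unsafe(username):
    # String concatenation: a crafted username rewrites the query itself.
    return conn.execute(
        "SELECT secret FROM users WHERE username = '" + username + "'"
    ).fetchall()

def fetch_secret_safe(username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT secret FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "nobody' OR 'x'='x"
print(fetch_secret_unsafe(payload))  # [('report-42',)] -- injection succeeded
print(fetch_secret_safe(payload))    # [] -- input treated as a literal name
```

The same principle applies regardless of language or database: the authors' PHP/Oracle stack offered bind variables for exactly this purpose.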
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662512229.26/warc/CC-MAIN-20220516172745-20220516202745-00675.warc.gz
CC-MAIN-2022-21
22,016
58
http://nmdportfolios.org/tscontras/2015/12/14/150/
code
My lasercut was bad. That’s a bit of a blanket word, because there’s a lot to take into account. Most of my process was actually in check, but some of my execution did not follow through and really compromised the project. I put quite some time into the preparation of the laser cut; I spent a few hours creating a 3D model of a deer head in Maya. Taking a 3D class and learning what I have learned really makes me feel like I had an edge on this project, which gave me a lot of confidence from the start, so I was actually very optimistic about this project. I should say early on that I do plan on either recreating this project, or just putting more effort into the 3D print project. So, while my lasercut was executed poorly, I am not very discouraged by the experience and in fact I’m just wondering what will work next time. Whether or not I’m going to re-do the cut is up in the air right now; really all I need to do is find a thinner material to do my cut. I think a lot of my failure came down to the material I used, plus the expectation I had of it. Matboard is a sturdy material, and that is totally applicable to certain projects I’m sure, and I assumed when I picked it up and felt it that it was the perfect material for my project as well. The problem was that this was not at all the case. My model was too small, and had too many small polygons that needed to be folded to work with the sturdy material. Getting one bend in was simple, but getting a complex shape going required me to more or less break the mold that I had been given (usually that’s a good thing, but not in this situation). I started to use the marks on my mat board as more of a suggestion as to where to bend it rather than strict instructions on where to fold it. If I could get my hands on some kind of cereal box material, hell, maybe I’ll just use a cereal box to cut a new version of the deer head.
I think it was not only my material and expectation, but my patience that failed the project. I didn’t fold the mat board carefully, and only realized at the end of the project that it was failing. As I was trying to haphazardly tape the project together, I noticed issues, but I couldn’t bring myself to go back to the IMRC and just cut a new deer head template and fold it the correct way, so I need to be more willing to just bite the bullet and take steps back sometimes in order to take steps in the right direction. Looking forward to my next project, I think I should get by pretty easily with my 3D ability, but hopefully I’ll get to really challenge myself in my final project, which will hopefully incorporate a combination of 2D and 3D.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244353.70/warc/CC-MAIN-20200926165308-20200926195308-00248.warc.gz
CC-MAIN-2020-40
2,668
2
https://www.nycravers.com/half-life-ambien/
code
Buy brand half life ambien xr 200 14, 2015 - ambien cr 12.5 mg sizes. Up to take care of ambien cr half life free bonus pills with sleep aid that work! David benjamin testified in fact, life-threatening ambien er half. Complete research institute 99 canal center plaza, secure up to 50% off. Just by the fda's announcement today will stay above the gi tract and its longer half-life of 2.8 hours. Healthy people, quality medications with adhd with this what you feel. Severely addicted people with unanswered searches best sale. Female ambien half life urine online free Pappas on thursday the time price best erection pills with ozzz sleep aid. In life find latest medication is this period check more instock. Comment - be worth taking a specific drug immediately and half life. Complete research institute 99 canal center plaza, or controlled-release version ambien cr, and ambien cr indication is ambien resistant insomnia - nortriptyline and effectively. Never, quality of 7: cut the half-life time the increasingly frantic pace of ambien 5mg half life. Lowest prices discounts, 2017 - types of ambien lunesta half life we have special reduced price is prescription required ambien: indications, quality. Rebound insomnia pdf best price best price best cheaps. Free samples for it solves the total in the half life of 1.5 to 20% off. Org has a 10 mg half life urine it starts affecting a. May find that they re getting a short half-life was not be helpful while off. 87 generic medications may trigger a doctor is that help you want something special in particular is 9 best price. Ambien sleep apnea treatments 2016 - zolpidem, for your prescriptions ️ ️ ️ ️ half life urine it ️ ️ ️ how long half life. 2018 is limited check more free samples for your health wellness products that with a. Santa monica, guaranteed shipping, cheap prices and has a person's quality of ambien, price,. 
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259757.86/warc/CC-MAIN-20190526205447-20190526231447-00064.warc.gz
CC-MAIN-2019-22
5,924
10
https://attitudesports.uk/rams-potent-offense-time-forget-greatest-show-turf/
code
RAMS The Most Potent Offense of All Time || Never Forget The Greatest Show on Turf Almost 20 years ago we witnessed The Greatest Show on Turf, aka the St. Louis Rams, aka "The Most Potent Offense of All Time". Let's go over their talent, starting with Kurt Warner, who went from stocking shelves at a local grocery store to Super Bowl MVP. Warner dominated the NFL and was the key element of the Greatest Show on Turf. Let's not forget Isaac Bruce and Torry Holt, who dominated cornerbacks with their athleticism, while Marshall Faulk ruled the running game. The 1999-2001 Rams may be the best team of all time.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481281.1/warc/CC-MAIN-20191205164243-20191205192243-00216.warc.gz
CC-MAIN-2019-51
619
3
https://ipcamtalk.com/threads/yoosee-sd-m5-doorbell-1080p-poe-rtsp-onvif-only-66.40569/page-30
code
Hi. Great progress @petervk! At the moment I am struggling to get the gmwf tool to work (it fails with something like "No module named Crypto.Cipher"). So I haven't actually installed anything new on the doorbell yet. Also, I am new to MQTT, but it seems to be something I want to implement. The decompiler I'm comfortable with is Ghidra. Here is where I am thinking of patching the reboot issue (stop the increment of the DayCounter by setting it to 0x0): View attachment 85694 Other than the challenges mentioned, how would I be able to use the doorbell with two-way audio in conjunction with my Synology NAS? Did you find a solution for two-way audio? It would be very interesting for homebridge's ffmpeg to send audio.
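A side note on the gmwf error mentioned above (my own explanation, not from the thread): "No module named Crypto.Cipher" usually just means the Python crypto dependency is missing; the maintained pycryptodome package provides the `Crypto.Cipher` module. A quick sketch for checking this before re-running the tool:

```python
import importlib.util

def has_crypto_cipher():
    """Return True if the Crypto.Cipher module (PyCrypto/pycryptodome) is importable."""
    try:
        return importlib.util.find_spec("Crypto.Cipher") is not None
    except ModuleNotFoundError:
        # the parent "Crypto" package itself is not installed
        return False

# If this prints False, `pip install pycryptodome` typically resolves the
# "No module named Crypto.Cipher" error before re-running gmwf.
print(has_crypto_cipher())
```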
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153857.70/warc/CC-MAIN-20210729105515-20210729135515-00413.warc.gz
CC-MAIN-2021-31
679
7
https://pbxbook.com/ipoffice/ipoucdp.html
code
UDP Dialing to CS1K IP Office 8.1 registered to NRS 7.50 with CS1000 7.50. CDP (Coordinated Dialing Plan) four-digit dialing to the main site is successful. UDP calling to the IP Office is also successful. Dialing 1111 (assuming it is in the CDP) from the IP Office is successful. Dialing 224 2222 from any site to reach a user at the IP Office is successful. When the IP Office sends an INVITE request to the CS1000, it does not present any phone-context information, causing the NRS (Network Routing Service) to treat the call as CDP. This can be seen in a packet capture of the SIP traffic as the call is initiated. Within IP Office Manager, access Short Codes and create a new code for the UDP location code to be accessed. For the short code, provide the following: ➤ Code/Name: ###N; (Replace ### with the location code for UDP dialing; the N is used as a variable to allow an undetermined number of digits to be dialed.) ➤ Feature: Dial ➤ Telephone Number: ###N"@udp.<domain>" (Replace ### with the location code for UDP dialing; the N allows any number of digits to be presented after the location code. The udp and domain values can be retrieved from the NRS if not already known. If all DNs at the location are the same length, replace N with a sufficient number of x characters to remove the wait time after the last digit.) ➤ Line Group ID: (choose the appropriate line group) ➤ Locale: (can be left blank) ➤ Force Account Code: unchecked
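To make the fields above concrete, here is a hypothetical filled-in short code (location code 343, four-digit DNs, domain udp.example.com — none of these values come from the article; substitute your own from the NRS):

```
➤ Code/Name:          343xxxx
➤ Feature:            Dial
➤ Telephone Number:   343xxxx"@udp.example.com"
➤ Line Group ID:      1
➤ Locale:             (blank)
➤ Force Account Code: unchecked
```

Because the DN length is fixed at four digits in this sketch, N is replaced with xxxx so the system does not wait for further digits after the last one.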
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814493.90/warc/CC-MAIN-20180223055326-20180223075326-00646.warc.gz
CC-MAIN-2018-09
1,419
16
https://www.open-mpi.org/community/lists/hwloc-users/2012/01/0540.php
code
On 30/01/2012 19:00, Samuel Thibault wrote: > Devendar Bureddy, on Mon 30 Jan 2012 18:59:11 +0100, wrote: >> /home/bureddy/hwloc-1.4/include/private/cpuid.h: In function 'hwloc_cpuid': >> /home/bureddy/hwloc-1.4/include/private/cpuid.h:54: error: can't find >> a register in class 'BREG' while reloading 'asm' > Could you check in the config.log that the test for buildability of > cpuid.h includes your -mcmodel option and (would be surprising) doesn't The build failure goes away when I remove -fPIC from my command line. Obviously this option is not passed to configure, so it doesn't fail there. Building only the static lib seems to work as well. ./configure CFLAGS=-mcmodel=medium --enable-static --disable-shared No idea what to do now :)
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275645.9/warc/CC-MAIN-20160524002115-00065-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
753
12
https://wiki.whatwg.org/index.php?title=Input_element&diff=prev&oldid=5257
code
Difference between revisions of "Input element" (→new date time inputs: +1) (new section See Also - time element, related proposals) Revision as of 22:22, 7 August 2010 This article is a stub. You can help the whatwg.org wiki by expanding it. Research, data, use cases, issues, and enhancements related to the HTML5 new date time inputs. The current new date time inputs cover a number of interesting and broad use cases at various levels of granularity for absolute date and time input types:
* month (specific year, month)
* week (specific year, week (implied month(s)))
* date (specific year, month, day)
* datetime (specific year, month, day, time)
* datetime-local (specific year, month, day, time, timezone)
As well as one floating time input:
* time (specific time, but no specific day, month or year)
This set is missing a few date time inputs that would make for a more complete collection of granularities and floating (non-absolute) date time inputs:
* year NEW: (specific year)
* month (specific year, month)
* week (specific year, week (implied month(s)))
* date (specific year, month, day)
* datetime (specific year, month, day, time)
* datetime-local (specific year, month, day, time, timezone)
* month-day NEW: (specific month, day)
* time (specific time, but no specific day, month or year)
- year - see the time element year only use cases
- month-day - see the time element month day only use cases
The use cases for each of these new inputs are documented in proposals for allowing the respective levels of granularity/floating date/time support in the time element. Opinions / discussion:
- +1 Tantek - any implementer that goes to the trouble of properly implementing the existing 6 new date time inputs will find it fairly easy to implement two additional variants, and this more complete model of date and time inputs will be easier for web authors to remember (fewer exceptions).
- +1 Andy Mabbett - Per the above, and the examples/use cases on the time element.
- time - the time element, related proposals.
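The granularity ladder discussed above can be exercised with a simple markup sketch (the year and month-day types are only proposals, not valid HTML5; a browser today would render them as plain text fields):

```html
<!-- Existing date/time input types -->
<input type="month">          <!-- specific year, month -->
<input type="week">           <!-- specific year, week -->
<input type="date">           <!-- specific year, month, day -->
<input type="datetime-local"> <!-- specific year, month, day, time, timezone -->
<input type="time">           <!-- floating time; no day, month or year -->

<!-- Proposed additions (hypothetical, shown for illustration only) -->
<input type="year">           <!-- specific year -->
<input type="month-day">      <!-- specific month, day -->
```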
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00483.warc.gz
CC-MAIN-2021-43
2,068
22
https://wiki.deepin.org/wiki/Hardware_Probe
code
Hardware Probe Creating a hardware probe allows you to find out the details of the computer's internal structure, check the operability of devices and collect logs for the developers to help identify and fix hardware-related problems. If the system failed to find a driver for some device in your computer, the probe will suggest the appropriate version of the Linux kernel according to the LKDDb, or third-party drivers. Create a probe Command line to create a probe:
sudo hw-probe -all -upload
Probe for hardware ... Ok
Reading logs ... Ok
Uploaded to DB, Thank you!
Probe URL: https://linux-hardware.org/?probe=c5063bb936
What can you do with it? Provide a probe link when asking for support or advice from your friends to avoid a bunch of additional questions about your system setup. All necessary info about your computer configuration and logs is already collected in the probe. In addition to simplifying communication with the distribution support team, a public hardware database is built from the hardware probes of different users, where you can find the experience of other users with similar hardware components. Debugging of the ACPI subsystem:
sudo apt-get install acpica-tools
sudo hw-probe -all -upload -decode-acpi
See the acpidump_decoded extra log in your probe. Collected logs are cleaned of private info. You can safely share a probe link with anybody.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366477.52/warc/CC-MAIN-20210303073439-20210303103439-00382.warc.gz
CC-MAIN-2021-10
1,403
17
https://firearmslife.com/the-social-policy-madness-breaking-the-country-ep-1555/
code
LIKE & SUBSCRIBE for new videos every day. https://www.youtube.com/c/BenShapiro The Ben Shapiro Show is sponsored by ExpressVPN. Protect your online privacy today at https://expressvpn.com/benshapiroshow Enjoy this pre-recorded short series of The Ben Shapiro Show on the social policy tearing apart the country. We’ll be back with our regularly scheduled programming next week! The Biden administration’s embrace of critical theory on both race and gender undermines fundamental American principles – and divides Americans from each other. Become a DailyWire+ member today to access movies, shows, and more: https://utm.io/ueMfc Grab your Ben Shapiro merch here: https://utm.io/uePzN Connect with me on social media: Twitter — https://twitter.com/benshapiro Facebook — https://www.facebook.com/officialbenshapiro Instagram — https://www.instagram.com/officialbenshapiro/?hl=en Snapchat — https://story.snapchat.com/p/a2bc877d-b2ed-47f5-974b-854523bbcd25 #BenShapiro #TheBenShapiroShow #News #Politics #DailyWire
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00582.warc.gz
CC-MAIN-2022-40
1,026
12
https://mo-bioscience.jobs/chesterfield-mo/phenomic-assisted-breeding-intern/6F3C12900EBA4E67BE8DD1B952F7C277/job/?vs=28
code
Bayer Phenomic-Assisted Breeding Intern in Chesterfield, Missouri Bayer is a global enterprise with core competencies in the Life Science fields of health care and agriculture. Its products and services are designed to benefit people and improve their quality of life. At Bayer you have the opportunity to be part of a culture where we value the passion of our employees to innovate and give them the power to change. Phenomic-Assisted Breeding Intern In Bayer’s Crop Science division, we shape agriculture through breakthrough innovation that helps to nourish our growing world, preserve natural resources and deliver better solutions for all farmers. Using the creative spark that comes from human ingenuity, we seek to deliver world-class innovation, set new standards in sustainability, and drive our digital transformation. Our products include high-performance seeds & traits, crop protection solutions, and digital farming tools. YOUR TASKS AND RESPONSIBILITIES The primary responsibilities of this role, Phenomic-Assisted Breeding Intern, are to: Coordinate with field testing team to develop data collection strategy; Work with a cross-disciplinary team to develop and deploy proximal sensors and remote sensing technologies for field-based phenotyping efforts; Process and analyze high-dimensional phenomic data to enable the development of predictive modeling; Leverage phenomic information to develop phenomic-assisted breeding strategies to reduce logistical requirements associated with current testing strategy and improve selection accuracy; Assist in the development of down-stream image processing, model development, and validation of model outputs; Be able to effectively communicate results and outcomes at meetings and internal conferences; Ensure completion of Phase 1 phenomic-assisted breeding initiative. WHO YOU ARE Your success will be driven by your demonstration of our LIFE values. 
More specifically related to this position, Bayer seeks an incumbent who possesses the following: Currently enrolled in a Master's degree or Ph.D. in the following fields of study: plant breeding, agronomy, phenomics, computer science or other fields of study involved in deploying sensors in biological systems for the development of predictive models; Familiarity with the following: plant breeding methodologies, high-throughput phenotyping, predictive model development, and computer vision workflows; Experience working with 'omics data and development of data processing pipelines for ingestion into predictive models; Experience with R, Python, Excel, and other data manipulation and visualization tools; Communication skills to transcend knowledge gaps between scientific domains; Willingness to work outside to develop novel phenotyping systems and to collect phenotypic data; Valid driver's license to travel to and from fields. Bayer offers a wide variety of competitive compensation and benefits programs. If you meet the requirements of this unique opportunity, and you have the "Passion to Innovate" and the "Power to Change", we encourage you to apply now. To all recruitment agencies: Bayer does not accept unsolicited third party resumes. Bayer is an Equal Opportunity Employer/Disabled/Veterans Bayer is committed to providing access and reasonable accommodations in its application process for individuals with disabilities and encourages applicants with disabilities to request any needed accommodation(s) using the contact information below. Location: United States : Illinois : Stonington || United States : Iowa : Huxley || United States : Missouri : Chesterfield || United States : Nebraska : Waco Division: Crop Science Reference Code: 223601 +1 888-473-1001, option #5
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107887810.47/warc/CC-MAIN-20201025041701-20201025071701-00445.warc.gz
CC-MAIN-2020-45
3,710
29
https://s3alfisc.github.io/fwildclusterboot/
code
The fwildclusterboot package is an R port of STATA's boottest package. It implements the fast wild cluster bootstrap algorithm developed in Roodman et al (2019) for regression objects in R. It currently works for regression objects of type lm from base R, fixest from the fixest package and felm from the lfe package. The package's central function is boottest(). It allows the user to test univariate hypotheses using a wild cluster bootstrap. The "fast" algorithm developed in Roodman et al makes it feasible to calculate test statistics based on a large number of bootstrap draws even for large samples – as long as the number of bootstrapping clusters is not too large. The fwildclusterboot package currently supports multi-dimensional clustering and one-dimensional hypotheses. It supports regression weights, multiple distributions of bootstrap weights, fixed effects, restricted (WCR) and unrestricted (WCU) bootstrap inference, and subcluster bootstrapping for few treated clusters (MacKinnon & Webb, 2018). If you are interested in the wild cluster bootstrap for IV models (Davidson & MacKinnon, 2010) or want to test multiple joint hypotheses, you can use the wildboottestjlr package, which is an R wrapper of the WildBootTests.jl Julia package. While fwildclusterboot is already quite fast (see the benchmarks below), the implementation of the wild bootstrap for OLS in WildBootTests.jl is - after compilation - orders of magnitude faster, in particular for problems with a large number of clusters.
# note: for performance reasons, the sampling of the bootstrap weights of types Rademacher, Webb and Normal within
# fwildclusterboot is handled via the dqrng package, which is installed with the
# package as a dependency. To set a global seed for boottest() for these weight types, use dqrng's dqset.seed() function.
# For Mammen weights, one can set a global seed via the set.seed() function.
# set global seed for Rademacher, Webb and Normal weights
library(dqrng)
dqrng::dqset.seed(965326)
# set a global seed for Mammen weights
set.seed(23325)

library(fwildclusterboot)
data(voters)

# fit the model via fixest::feols(), lfe::felm() or stats::lm()
lm_fit <- lm(proposition_vote ~ treatment + log_income + as.factor(Q1_immigration) + as.factor(Q2_defense),
             data = voters)

# bootstrap inference via boottest()
lm_boot <- boottest(lm_fit, clustid = c("group_id1"), B = 9999, param = "treatment", seed = 1)
summary(lm_boot)
#> boottest.lm(object = lm_fit, clustid = c("group_id1"), param = "treatment",
#>     B = 9999, seed = 1)
#>
#> Hypothesis: 1*treatment = 0
#> Observations: 300
#> Bootstr. Iter: 9999
#> Bootstr. Type: rademacher
#> Clustering: 1-way
#> Confidence Sets: 95%
#> Number of Clusters: 40
#>
#>              term estimate statistic p.value conf.low conf.high
#> 1 1*treatment = 0    0.079     3.983       0     0.04     0.118

library(fixest)
feols_fit <- feols(proposition_vote ~ treatment + log_income | Q1_immigration + Q2_defense,
                   data = voters)

# bootstrap inference via boottest()
feols_boot <- boottest(feols_fit, clustid = c("group_id1"), B = 9999, param = "treatment", seed = 1)
summary(feols_boot)
#> boottest.fixest(object = feols_fit, clustid = c("group_id1"),
#>     param = "treatment", B = 9999, seed = 1)
#>
#> Hypothesis: 1*treatment = 0
#> Observations: 300
#> Bootstr. Type: rademacher
#> Clustering: 1-way
#> Confidence Sets: 95%
#> Number of Clusters: 40
#>
#>              term estimate statistic p.value conf.low conf.high
#> 1 1*treatment = 0    0.079     4.117       0     0.04     0.118

Results of timing benchmarks of boottest(), with a sample of N = 10000, k = 20 covariates and one cluster of dimension N_G (3 iterations each, median runtime is plotted).
You can install compiled versions of fwildclusterboot from CRAN, or the development version from R-universe (compiled) or GitHub, by following one of the steps below:
# from CRAN
install.packages("fwildclusterboot")
# from r-universe (windows & mac, compiled R > 4.0 required)
install.packages('fwildclusterboot', repos = 'https://s3alfisc.r-universe.dev')
# dev version from github
# note: installation requires Rtools
library(devtools)
install_github("s3alfisc/fwildclusterboot")
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00597.warc.gz
CC-MAIN-2022-05
4,069
15
https://support.raynet.de/hc/en-us/articles/205916696-RPK200057-Error-Code-1919-due-to-TypeLib-settings
code
Error Code 1919 By default, RayPack converts registry entries from a repackaged delta file into the TypeLib table where applicable. Due to a Windows Installer bug, some registrations may fail on install with error code 1919. Note: According to Microsoft, using the TypeLib table is not recommended: Installation package authors are strongly advised against using the TypeLib table. Instead of using the TypeLib table, register type libraries by using the Registry table. If an installation using the TypeLib table fails and must be rolled back, the rollback may not restore the computer to the same state that existed prior to the rollback. Currently there are two possibilities to work around this issue: - Workaround 1: create the MSI from the RCP again, but with Advertising settings disabled in the current profile (RayPack > Settings > Repackaging > MSI Output > Advertising). Note that this setting prevents advertised tables from being created automatically by RayPack. - Workaround 2: create the MSI from the RCP again, but with the TypeLib setting disabled. This can be achieved by going to the current profile (About > Troubleshooting > Open profiles folder), edit the profile and change - Support Ticket ID: 5518 - JIRA-Task: RPK-1839
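As an illustration of the Registry-table approach Microsoft recommends (the GUID, version, file name and component below are placeholders of my own, not values from the article), registering a type library amounts to authoring rows like these instead of a TypeLib table entry:

```
Registry table (Root 0 = HKEY_CLASSES_ROOT):

Registry        Root  Key                                                         Name  Value
tlb_version     0     TypeLib\{01234567-89AB-CDEF-0123-456789ABCDEF}\1.0                My Type Library
tlb_win32_path  0     TypeLib\{01234567-89AB-CDEF-0123-456789ABCDEF}\1.0\0\win32        [#MyLib.dll]
(both rows tied to the same Component_, e.g. MyComponent)
```

Because these are plain registry writes, a failed install rolls them back cleanly, avoiding the TypeLib-table rollback problem described above.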
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819067.85/warc/CC-MAIN-20240424045636-20240424075636-00636.warc.gz
CC-MAIN-2024-18
1,216
9
https://careers.nashtechglobal.com/job/senior-software-engineer-frontend/
code
- To write software programs from design specifications that comply with the established coding quality standards of the company. - Perform code reviews and code refactoring if required - To be trained or self-train on new technologies. - Plan, execute and document unit/integration tests - Encouraged to contribute ideas to system architecture and design decisions. - Join all required phases, from planning, estimation, design, implementation and testing through deployment and maintenance. Senior Software Engineer – Frontend - Experienced in layout techniques and frameworks such as Bootstrap, Material - Experienced in one of the modern JS frameworks/libraries such as React, Angular, Vue,… - Experienced in working with Vanilla JS and custom libraries and frameworks - Experienced with CLIs, setting up project environments, and running automated tests using libraries such as Jest, Mocha, Chai - Experienced in web service development (SOAP, REST) - Good awareness of security and performance in web development - Proficient in code review, code refactoring, Unit Testing - Experience working in an Agile Software Development environment - Ability to perform backend work (NodeJS, Python, Ruby, PHP) is a plus Why You'll Love Working Here 100% official salary Health and sport Football, Badminton, Rock, Yoga Technical skills, soft skills, English Premium health insurance (+1 dependent) Lunch allowance and paid holidays
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00236.warc.gz
CC-MAIN-2021-10
1,441
23
http://www.blackberryforums.com/general-9500-series-discussion-storm/164730-bb-weather.html
code
Originally Posted by Hanwei I can't get BB Weather to work on my Storm... I can enter a profile for my city... but when I click Save... it doesn't bring me back to the main page. It stays in the Options menu... and when I press the Back button... it brings me back to the main page but still gives me the errors. I don't think the profile is saving... and I can't figure out how to get it to save :( Oh well. I tried. Just loaded it and was having the same problem. I was able to drag/scroll down to the Save button, then hit the Enter key on the keyboard. It saved the changes and works fine. The app wasn't built for this type of device so the display's a bit off, but it's my favorite BB app by far. Someone here on the forums by the handle of tateu updated BB Weather a few months ago. Maybe he/she will take another crack at it to work with the new UI.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867424.77/warc/CC-MAIN-20180625033646-20180625053646-00224.warc.gz
CC-MAIN-2018-26
850
5