Code them to do that. Okay, a bit more detailed answer, taking it from two points of view. But first, a short story about AIs. The story is entitled the Chinese Room.

There are two rooms, each closed off from the world save a single slot. Into the slot you can hand a slip of paper, and back out of the slot comes another slip of paper. In one of these rooms lives a man who can speak Chinese. In the other I live. I, sadly, cannot speak Chinese. Fortunately for me, however, I've been given an infinitely large book covering every possible written sentence (or combination of sentences) in Chinese, and for each one my lovely little book has a corresponding response. Google Translate has got nothing on me!

Okay, so a line of Chinese speakers lines up at each room, handing in slips of paper. The first room, where the man who can speak Chinese (cheater!) lives, produces a correct, sensible response (in Chinese) for each person in line. However, I won't be bested! When the people hand in a slip of paper to me, I look up the sentence in my book, find the perfect response, copy it down, and hand the slip of paper back. I don't know a word of Chinese, yet I'm able to produce a correct, sensible response back in Chinese.

Ignoring the men inside, are both rooms literate in Chinese? Both rooms can 'read' it and 'speak' it, but only one room has the understanding. Since I have no clue what I'm writing, can I speak Chinese? This is a question that we debate today. The second room, the room with the book, is a computer program as we know it today; it has no concept of what it is doing, it simply does, because that's all a computer is. We have yet to figure out practically how to traverse the jump from "a bunch of logic gates that just do what we say" to "an entity with understanding of what it is doing." In fact, we don't even know how to approach that jump. What does true understanding even mean? How do we code that?
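The lookup-book room really is a trivial program. Here is a toy Python sketch of it; the phrasebook entries are invented for illustration (a real "book" would be infinitely large), but the point stands: the code responds "correctly" with zero understanding.

```python
# A toy "Chinese Room": a pure lookup table mapping slips of paper to
# canned responses. The entries are invented for illustration.
PHRASEBOOK = {
    "你好": "你好!",            # "Hello" -> "Hello!"
    "你会说中文吗?": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def room_with_book(slip: str) -> str:
    """Return the book's canned response, understanding nothing."""
    return PHRASEBOOK.get(slip, "...")  # shrug at unknown input

print(room_with_book("你好"))  # the room "answers" without comprehension
```

The function passes every behavioral test the man who speaks Chinese passes, for the inputs it covers, yet there is plainly nothing inside it to attribute understanding to.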
It's theorized that to jump that gap we'll need machines fundamentally different from the computers we have today; not just more advanced, but fundamentally different. So, back to your question, we have two approaches to the concept of AIs:

AIs are simply sufficiently advanced programs with sufficiently advanced hardware

"Any sufficiently advanced technology is indistinguishable from magic," as a very smart man once said. To most readers and writers of science fiction, this is all AIs are: sufficiently advanced that we wave our hands and exclaim "magic!" ah, erm, I mean, "science!" Except think about this for a moment; they're not magic. They're just... computers. No, really, think about this again. This sort of AI is a single piece (or amalgamation) of code that humans (or other software) wrote, and that's really all it is. The programs have no greater concept of morality than a calculator does. In fact, your calculator has no "concept" of the numbers it's processing; it's just logic gates. It's the man in the room with the book of Chinese; it can perform the operations it was designed to do, and does so fantastically. There is no morality to code; it just stores objects in memory, stores them in long-term storage, manipulates the two, and accepts input and output. It's just bits being flipped on and off. Honestly, for this sort of AI, the answer to how to implement AI is just to code it. And do note: we're not coding morality; there's no point in being that abstract. We're either training actions (and coding the 'learning' mechanism) or directly coding what outputs it provides given inputs. It will never know that squishing a human child is "wrong"; in fact, it doesn't know anything. It simply doesn't produce that output given the input of its sensors, databases, etc. As to a virus...
No single piece of code could be complex and flexible enough to examine and alter all that different code, so the concept of a virus to do this goes straight out the window. Viruses work by exploiting one, two, or maybe a few vulnerabilities and then doing their thing. They're always very targeted, very specific, and deal with the environment only enough to get the job done. A single virus won't be effective against Ubuntu and Windows and iOS and Android and Solaris... and they don't seek to dynamically examine and alter the code they run on; the most advanced viruses humanity has ever made have also been the most focused, the most targeted. No actual, real-life virus could possibly be this flexible. Now to the other possibility...

AIs aren't just code; they're a fundamental revolution in artificial creation

So this is the alternative to the above. The answer is (drum roll please) "we have no way of knowing." Yep, sorry about that. The most brilliant computer scientists (and scientists and engineers in some other fields) in the world are trying--today--to theorize how to jump this gap. We don't yet know how to do this, and we certainly don't know how to build such an AI, or change its behavior after it's built. Ask this question again in 10, 30, or 100 years and we might have an answer for you.

Woah, back off the science!

Okay, so you didn't tag this "hard-science," so let's pump the brakes a bit. Let's hand-wave it. Pick either type of AI (just code, or something fundamentally different), describe how it's different in your setting, don't dive too deep into details, and hand-wave the rest. In this case go with the virus idea; a computer virus fundamentally describes what you want: an intrusive, unwanted piece of code that bypasses security measures and alters the digital environment's behavior. You could even make this virus an AI that runs within the AI it's trying to alter, or maybe a small part of the AI (a smaller...
whatever, chunk of code, I suppose) that makes callbacks to the parent AI. As long as you're willing to hand-wave and not dive too deep into details, just make the explanation whatever sounds interesting. If you want real science though, the answer is just to code the AIs to do their job correctly. Edit: @RBarryYoung provided the somewhat spirited addition that the Chinese Room example comes from one John Searle. You can read about it further here. Thanks for the credit correction, Mr. Young.
#!/usr/bin/env python
'''
Project: Geothon (https://github.com/MBoustani/Geothon)
File: Conversion_Tools/shp_line_to_point.py
Description: This code converts a line shapefile to a point shapefile.
Author: Maziyar Boustani (github.com/MBoustani)
'''

import os

try:
    import ogr
except ImportError:
    from osgeo import ogr

# an example of shapefile data
line_shp_file = "../static_files/shapefile/rivers_lake_centerlines/ne_50m_rivers_lake_centerlines.shp"

# open line shapefile
line_datasource = ogr.Open(line_shp_file)

# set driver to ESRI Shapefile to be able to create the point shapefile
driver = ogr.GetDriverByName('ESRI Shapefile')

# output point shapefile file name
point_shp_file = 'points.shp'

# output point shapefile layer name
layer_name = 'point_layer'

# create shapefile data source (file), deleting any stale copy first
if os.path.exists(point_shp_file):
    driver.DeleteDataSource(point_shp_file)
point_datasource = driver.CreateDataSource(point_shp_file)

# get number of layers in the line shapefile
layer_count = line_datasource.GetLayerCount()
for each_layer in range(layer_count):
    # get one line layer
    layer = line_datasource.GetLayerByIndex(each_layer)
    # get the line shapefile spatial reference
    srs = layer.GetSpatialRef()
    # create a point layer with the same spatial reference as the line shapefile
    point_shp_layer = point_datasource.CreateLayer(layer_name, srs, ogr.wkbPoint)
    # get number of features in the line layer
    feature_count = layer.GetFeatureCount()
    for each_feature in range(feature_count):
        # get each line feature
        line_feature = layer.GetFeature(each_feature)
        # get the line feature geometry
        feature_geom = line_feature.GetGeometryRef()
        if feature_geom.GetGeometryName() != 'MULTILINESTRING':
            # get the point data from the line feature
            points = feature_geom.GetPoints()
            for point in points:
                # make a point geometry
                point_geom = ogr.Geometry(ogr.wkbPoint)
                # add the point to the point geometry
                point_geom.AddPoint(point[0], point[1])
                # define the point feature
                point_feature = ogr.Feature(point_shp_layer.GetLayerDefn())
                # add the point geometry to the feature
                point_feature.SetGeometry(point_geom)
                # add the point feature to the point layer
                point_shp_layer.CreateFeature(point_feature)
I want to hire a web scraping expert. I have an Excel file with 735 URLs. Scrape all product prices for each URL and sum all prices for each brand. I need only 2 columns: 1) Number of Products 2) Sum of All Product Prices. Please check the attached Excel file for a sample and all URLs. Note: The deadline is 24 hours for this task.

Hi There, As discussed, placing my bid to scrape prices and sums of products. Available and ready to start. Regards, Imran

24 freelancers are bidding an average of $25 for this job

Okay, I can scrape the data from the links and place it into Excel. Please ping me to discuss more.

Dear Client, I have read your job description and understood all your requirements. I am experienced in this kind of job, and I also have a big team to complete it sooner. After seeing my previous [More]

Hello sir, I have a good team and I am ready now for this work and will give you good results, so please give me one chance....

Hello, I'm highly interested in this project. I did the same type of project [login to view URL] Don't hesitate to invite me. I am an expert in data entry, web search, web scraping, Excel, copy typing, etc. Please check my [More]

Hi, We are a team of professional software developers. We have expertise in Python and can readily work on your requirements. Kindly consider our proposal for the best results. Looking forward to talking for more info [More]

Hi, I can do this job efficiently. I have more than 10 years of experience in this field. I am flexible about working time and money. I believe work speaks better than words. Hope to have a positive response. Good day

Dear Client, I have read the job details for "Web Scraping Expert Needed" and am willing to work under your budget. I will scrape the number of products and their prices. I have checked the attached file and am ready to do wo [More]

Hi, After reviewing your job proposal I understood that you need to develop the website. Your job requirements match my skills and I can complete this job in the desired time frame. We can fix the price and ti [More]

Hello! I can do that. I am committed to doing your job correctly, promptly responding to any communications, and submitting on time. With Regards, Mostafa Kamel

Hi there, I have read your project "Web Scraping Expert Needed". I have worked on several similar projects. I do have expertise in the specified area. You can check my portfolio here: https://www.freelancer.com/u/Pooja [More]

Hi There! I will complete this project in the next 24 hours. I am ready to start the job for you. Do you need quick samples from me before hiring? Please let me know. Best, Rakib

Hello, I work at a beautiful coworking business in a big city. I'm only 19 years old and my boss set me up with the task of making a website in two weeks; I finished in 1. I can also send pictures of some work I have done [More]

Hi. I have a fast network connection so I can scrape data fast, and I am online now so I can start your job right now. I will do your job error-free. Looking forward to hearing from you. Thank you.
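The deliverable described above (a product count and a price sum per brand) boils down to a group-and-aggregate step once the prices have been scraped. A minimal sketch of that step in Python with pandas, where the brands and prices are invented sample rows standing in for data fetched from the 735 URLs:

```python
import pandas as pd

# Invented sample of scraped rows; in the real task each row would come
# from fetching one of the 735 URLs and parsing the product price.
scraped = pd.DataFrame({
    "brand": ["Acme", "Acme", "Globex", "Globex", "Globex"],
    "price": [19.99, 5.50, 100.00, 25.00, 75.00],
})

# The two requested columns, one row per brand:
# 1) number of products, 2) sum of all product prices.
summary = scraped.groupby("brand")["price"].agg(
    number_of_products="count",
    sum_of_prices="sum",
)
print(summary)
```

The resulting DataFrame can be written straight back to Excel with `summary.to_excel(...)`, matching the two-column format the client asked for.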
ABSTRACT: Large, recently-available genomic databases cover a wide range of life forms, suggesting opportunity for insights into genetic structure of biodiversity. In this study we refine our recently-described technique using indicator vectors to analyze and visualize nucleotide sequences. The indicator vector approach generates correlation matrices, dubbed Klee diagrams, which represent a novel way of assembling and viewing large genomic datasets. To explore its potential utility, here we apply the improved algorithm to a collection of almost 17,000 DNA barcode sequences covering 12 widely-separated animal taxa, demonstrating that indicator vectors for classification gave correct assignment in all 11,000 test cases. Indicator vector analysis revealed discontinuities corresponding to species- and higher-level taxonomic divisions, suggesting an efficient approach to classification of organisms from poorly-studied groups. As compared to standard distance metrics, indicator vectors preserve diagnostic character probabilities, enable automated classification of test sequences, and generate high-information density single-page displays. These results support application of indicator vectors for comparative analysis of large nucleotide data sets and raise prospect of gaining insight into broad-scale patterns in the genetic structure of biodiversity.

ABSTRACT: Comparative DNA sequence analysis provides insight into evolution and helps construct a natural classification reflecting the Tree of Life. The growing numbers of organisms represented in DNA databases challenge tree-building techniques and the vertical hierarchical classification may obscure relationships among some groups. Approaches that can incorporate sequence data from large numbers of taxa and enable visualization of affinities across groups are desirable.
Toward this end, we developed a procedure for extracting diagnostic patterns in the form of indicator vectors from DNA sequences of taxonomic groups. In the present instance the indicator vectors were derived from mitochondrial cytochrome c oxidase I (COI) sequences of those groups and further analyzed on this basis. In the first example, indicator vectors for birds, fish, and butterflies were constructed from a training set of COI sequences, then correlations with test sequences not used to construct the indicator vector were determined. In all cases, correlation with the indicator vector correctly assigned test sequences to their proper group. In the second example, this approach was explored at the species level within the bird grouping; this also gave correct assignment, suggesting the possibility of automated procedures for classification at various taxonomic levels. A false-color matrix of vector correlations displayed affinities among species consistent with higher-order taxonomy. The indicator vectors preserved DNA character information and provided quantitative measures of correlations among taxonomic groups. This method is scalable to the largest datasets envisioned in this field, provides a visually-intuitive display that captures relational affinities derived from sequence data across a diversity of life forms, and is potentially a useful complement to current tree-building techniques for studying evolutionary processes based on DNA sequence data.
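The classification scheme described in these abstracts can be sketched in a toy reconstruction (this is not the authors' code): one-hot encode each aligned sequence, take a group's indicator vector as the mean encoding of its training sequences (which preserves per-position character probabilities), and assign a test sequence to the group whose indicator vector it correlates with most strongly. The two groups and the short 8-base sequences below are invented.

```python
import numpy as np

BASES = "ACGT"

def encode(seq):
    """One-hot encode a DNA sequence into a flat vector."""
    v = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        v[i, BASES.index(b)] = 1.0
    return v.ravel()

def indicator_vector(training_seqs):
    """Mean of the one-hot encodings: per-position base probabilities."""
    return np.mean([encode(s) for s in training_seqs], axis=0)

def classify(test_seq, indicators):
    """Assign the test sequence to the group with the highest correlation."""
    x = encode(test_seq)
    return max(indicators,
               key=lambda g: np.corrcoef(x, indicators[g])[0, 1])

# Toy training data (invented): two "groups" of aligned 8-base sequences.
groups = {
    "bird": ["ACGTACGT", "ACGTACGA", "ACGTACGC"],
    "fish": ["TTTTCCCC", "TTTTCCCA", "TTTTCCCG"],
}
indicators = {g: indicator_vector(seqs) for g, seqs in groups.items()}
print(classify("ACGTACGG", indicators))  # correlates best with "bird"
```

The matrix of pairwise correlations between group indicator vectors, rendered as a false-color image, is what the abstracts call a Klee diagram.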
The C++ Script allows you to extend Dewesoft in a simple way - like Visual Basic can extend Microsoft Office tools. Using intuitive abstractions in modern C++ you can develop your own custom Math modules, without ever having to leave Dewesoft. The C++ Script can process data from any number of input channels into any number of output channels, enabling you to simplify and generalize your Dewesoft setups. Learn more about the C++ Script:

Custom Plugin Development

Even though DewesoftX supports a wide variety of data interfaces out of the box, you may also add support for a custom device, sensor, or protocol which is not available by default. This can be done by writing a custom plugin that uses the DCOM interface for reading and inserting values from your data source into Dewesoft. Custom plugins also allow you to create a customized GUI to fit the needs of your application within Dewesoft. The basic PRO training teaches you how to start with plugin development, while the advanced PRO training shows you how to smooth signals, minimize clock drift, and make your device the clock provider. Once the data from your device is available as a Dewesoft channel it can be used like any other channel, allowing you to take advantage of Dewesoft's built-in math, display, and export functionalities.

Custom Visual Controls

Built-in visual displays will cover most of your visualization needs for signal processing, data monitoring, and control, but DewesoftX also offers an easy extension of visual displays using a plugin programming interface. You can program your own displays using Dewesoft's DCOM plugin technology and your programming language of choice. While the design of your visual control is fully customizable, the general implementation is generic. This means that you can generate as many instances of the same item on your screen as you need and use them in combination with different channels of the same type - the ultimate tool for displaying data.
Find examples of custom visual controls in the developers' part of the download section.

Custom Export Formats

When you need to process data in third-party software and Dewesoft does not yet support a compatible data format, you can develop a custom export using Dewesoft's DCOM plugin technology. The data stored in Dewesoft channels can be processed by your program, written in the language of your choice, producing a file that is accepted by your tool for advanced analysis. As the export design is generic, a well-rounded export plugin will be able to process and correctly export data from any Dewesoft data file. Find examples of custom export plugins in the developers' part of the download section. For a different method of processing Dewesoft data files please refer to the DWDataReader API.

Dewesoft Visual Studio Extension

We listened to your wishes for better plugin development for DewesoftX data acquisition software. No more copying files and editing the Windows registry. Developing plugins has never been easier. We have developed an easy-to-use Visual Studio extension for writing DewesoftX plugins. Just install the Visual Studio Dewesoft extension and the wizard will take you through the steps to start developing a DewesoftX plugin. Our Visual Studio extension makes it easy to develop all kinds of plugins: - Custom Math Processing Plugins - Custom Visual Control Plugins - Custom Data Export Plugins - Custom C++ Plugin
What is the intended number of encounters per session in 4E? I've bought the 4E red box starter kit and taken on the role of DM for the first 'season' of my group. Reading up on DMing and playing D&D with articles like Angry DM's Build a Megadungeon, I get the feeling my group was running the game at 1/4 the 'suggested' pace. The box contains several encounters, let's say 8. Each session we had, we managed to complete 2 encounters at most, some sessions only 1. This made the game feel pretty slow, but I blamed it on our inexperience and the size of the group. The articles I've read gave me the impression the red box can be completed in 2 sessions. We needed 8, including checking our character sheets and playing through some of the single-player designed content. My question is: What number of encounters can be considered 'average', keeping in mind the below facts: We're all reasonably new to D&D (lots of waiting, thinking, looking up rules) Sessions are about 8 weeks apart (causing the rules and details to be slightly forgotten) The group is larger than designed with 6 players, making it take even longer due to #1 Average play time per session is around 2-3 hours Related, possible duplicate: What is the normal "pace" for a 4E session/campaign? Also, if you feel like things are going too slow, here are some possible remedies: Should I do something about slow pace in my inexperienced group?
There is a large variance in "encounters per session" between groups, for a variable number of reasons:

- Time allotted per session (some people play 3 hours per session, some play twelve)
- Complexity of each combat encounter (throwing some monsters onto blank terrain is faster than creating multiple complex interactive objects)
- Experience with the rules (the more you play, the less you need to consult rules on the fly, cutting time)
- Group size (small groups have less time between turns)
- Group composition (some classes are more direct, like Rogue and Sorcerer; some, like Shaman and Warlock, are filled with minutiae)
- Group interest in "roleplaying" (some groups love to talk with each NPC and among themselves; some like to jump from combat to combat)
- Etc, etc, etc...

The only recommendation in the DMG (pg 121) is: The experience point numbers in the game are built so that characters complete eight to ten encounters for every level they gain. In practice, that’s six to eight encounters, one major quest, and one minor quest per character in the party. That is an average per level, but not per session, exactly because the rhythm per session is table-specific. Since D&D 4e is built on the premise that people like tactical encounters (in contrast with other editions, which usually favor "exploration" over "combat"), it gives you a lot of tools to make combat very interesting. That's why you have a lot of combat rules. There is no problem with a combat encounter lasting an hour or two, as long as everyone is enjoying that hour-long combat. If you think combat is going too slow for your group's interest, you can try another system that better fits your playstyle.
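To see why this particular group's pace feels slow, the DMG guideline can be turned into a quick back-of-the-envelope calculation; the per-session and per-level figures below come from the question and the DMG quote above.

```python
# Rough pacing arithmetic: how long one character level takes in real time.
encounters_per_level = 8        # low end of the DMG's 8-10 guideline
encounters_per_session = 2      # the asker's group at its best pace
weeks_between_sessions = 8      # the asker's schedule

sessions_per_level = encounters_per_level / encounters_per_session
weeks_per_level = sessions_per_level * weeks_between_sessions
print(f"{sessions_per_level:.0f} sessions per level, "
      f"about {weeks_per_level:.0f} weeks of real time")
```

Four sessions per level is perfectly normal; it's the eight-week gap between sessions that stretches a single level across most of a year.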
Now, in my personal opinion, your group size is within the limits but leaning on the too-large side (I would recommend 4 for starter DMs, but the game supports 4-6 as a starting number), playing for too little time (I usually recommend 4 to 6 hours per session), and with very spaced sessions (the best interval is weekly or bi-weekly). This time interval makes it hard for you to keep a grasp of the rules between sessions, and sometimes you even forget about the game's storyline and have to be constantly reminded ("Who is this guy we are talking to? Oh, right..." Think of it like playing a new videogame for two hours and then only picking it up again eight weeks later. You have to be reminded of the story and the mechanics as well.) In the end, what matters is whether your group had fun. RPGs in general are not about how fast you reach a destination, but what you do on the voyage. As long as everyone is having fun, keep your own rhythm and enjoy the game. Great answer! Quick comment on your personal opinion: It looks like we're losing 1 player as he's more in it for the social aspect. The session length and spacing are affecting each other: We can't meet more often due to our social lives and individual locations, so when we do meet we end up chatting away the first hour or more. I try to keep everyone on the same page concerning the story by letting the players describe the last session together. Bottom line: We are having fun, some things can be changed and some can't, but we are having fun. +1, This is a great answer. I just want to add that it also doesn't hurt to switch things up. My 4E group plays every two weeks for 3 hours. Some sessions we fight the entire time, other sessions there's not a single round of combat.
# pylint: disable=C0301
''' contents of zhenglin's year6 exercise '''

ZHENGLIN_YEAR6 = {
    'title': 'Shanghai Math Project Year 6',
    'user': 'Zhenglin',
    'template': 'exam',
    'subject': 'math',
    'sections': [
        {
            'title': '3.1 Using letters to represent numbers (1)',
            'exercises': [
                r'$b+b+b+b\times b$ can be simply written as $\rule{2cm}{0.15mm}$',
                r'In a triangle, if $\angle 1=a^{\circ}$ and $\angle 2=b^{\circ}$, then $\angle 3=$ $\rule{2cm}{0.15mm}$.',
                r'In an isosceles triangle, if the base angle is $a^{\circ}$, the degree of the vertex angle is $\rule{2cm}{0.15mm}$',
                r'When the sum of three consecutive even numbers is $a$, the number in the middle is $\rule{2cm}{0.15mm}$, the least number is $\rule{2cm}{0.15mm}$ and the greatest number is $\rule{2cm}{0.15mm}$'
            ]
        },  # a section object
        {
            'title': '3.2 Using letters to represent numbers (2)',
            'exercises': [
                {
                    'description': r'Use expressions with letters to represent the relations between quantities.',
                    'exercises': [
                        r'The quotient of $5$ divided by $x$ plus $n$ is $\rule{2cm}{0.15mm}$',
                        r'$320$ is subtracted by $12$ times $m$: $\rule{2cm}{0.15mm}$'
                    ]
                },
                {
                    'description': r'Use expressions with letters to represent the quantities below',
                    'exercises': [
                        r'A car has travelled $t$ hours at the speed of 85 km per hour. It has travelled $\rule{2cm}{0.15mm}$ $km$ in total',
                        r'A shirt costs $a$ pounds, and a pair of trousers costs $b$ pounds. The total cost of buying 3 sets of these clothes is $\rule{2cm}{0.15mm}$ pounds',
                    ]
                },
                {
                    'description': r'Look at each number sequence carefully and complete its $5^{th}$, $6^{th}$, and $n^{th}$ terms.',
                    'exercises': [
                        r'0, 5, 10, 15, $\rule{2cm}{0.15mm}$, $\rule{2cm}{0.15mm}$, ..., $\rule{2cm}{0.15mm}$',
                        r'13, 23, 33, 43, $\rule{2cm}{0.15mm}$, $\rule{2cm}{0.15mm}$, ..., $\rule{2cm}{0.15mm}$',
                    ]
                },
            ]
        },  # a section object
        {
            'title': '3.3 Simplification and evaluation (1)',
            'exercises': [
                {
                    'description': r'Simplify the following expressions.',
                    'exercises': [
                        r'$36s-15t-24s+35t=$ $\rule{2cm}{0.15mm}$',
                        r'$48x+75y-18x-6x=$ $\rule{2cm}{0.15mm}$'
                    ]
                },
                {
                    'description': r'Fill in the blank',
                    'exercises': [
                        r'Joe has $x$ pencils. Roy has 3 more pencils than Joe. They have $\rule{2cm}{0.15mm}$ pencils altogether',
                        r'Each pack of flour weighs 10 kg. Each pack of rice weighs $x$ kg. $y$ packs of flour and 5 packs of rice weigh $\rule{2cm}{0.15mm}$ kg in total',
                        r'Don, Evans and Frank each bought 4 pens at $a$ pounds each. They paid $\rule{2cm}{0.15mm}$ pounds in total for the pens. They also each bought $b$ exercise books. Each book costs 2 pounds. They paid $\rule{2cm}{0.15mm}$ pounds in total for the books.'
                    ]
                },
                {
                    'description': r'Short answer questions',
                    'exercises': [
                        r'\pounds10 can buy $3a$ kg of a fruit. Accordingly, Fiona bought $9.6a$ kg of the fruit with \pounds50. How much change did she get?',
                        r'It took Joan $m$ hours to make 21 paper flowers. It took Mary 2 hours to make $n$ paper flowers. How many paper flowers did each of them make on average? How many paper flowers did both of them make every hour, on average?',
                        r'The dividend is 6 times the divisor. If the divisor is $x$, what is the sum of the dividend, divisor, and quotient?'
                    ]
                },
                r'The length and width of a rectangle are $a$ cm and $b$ cm respectively, and $a>b$. The side length of a square equals the difference between the two sides of the rectangle. What is the sum of their perimeters?'
            ]
        },  # a section object
        {
            'title': '3.4 Simplification and evaluation (2)',
            'exercises': [
                {
                    'description': r'To repair a section of a road, the repair team repaired $c$ m of the road every day for the first 6 days, and there were $s$ m left.',
                    'exercises': [
                        r'Use an expression to express the length of the road section: $\rule{2cm}{0.15mm}$',
                    ]
                },
                {
                    'description': r'Simplify first and then evaluate.',
                    'exercises': [
                        r'When $m=4$ and $n=1.8$, find the value of $15m+5m-18n-12n$. $\rule{2cm}{0.15mm}$',
                    ]
                },
                {
                    'description': r'A farm has cultivated sycamores and cedars. They are each in $x$ rows. Sycamores are in rows of 12 and cedars are in rows of 14 across.',
                    'exercises': [
                        r'How many sycamores and cedars has the farm cultivated in total?',
                        r'When $x=20$, how many sycamores and cedars are there on the farm?'
                    ]
                },
                {
                    'description': r'There are $a$ pupils in a school\'s track and field team. The number of pupils in the hockey team is 4 fewer than twice the number in the track and field team.',
                    'exercises': [
                        r'Use an expression with letters to represent the total number of pupils in the two teams',
                        r'When $a=24$, how many pupils are there in the two teams?'
                    ]
                },
            ]
        },  # a section object
        {
            'title': '3.5 Simple equations (1)',
            'exercises': [
                {
                    'description': r'True or False',
                    'exercises': [
                        r'An expression with variables is an equation.',
                        r'$9-3x$ is not an equation.'
                    ]
                },
            ]
        },  # a section object
        {
            'title': '3.6 Simple equations (2)',
            'exercises': [
                {
                    'description': r'Emily was given \pounds30 in \pounds2 coins and \pounds5 notes. Let $x$ be the number of \pounds2 coins and $y$ be the number of \pounds5 notes.',
                    'exercises': [
                        r'Establish an equation with $x$ and $y$',
                        r'List all the possible combinations of $x$ and $y$ that satisfy the equation. (Note: $x$ and $y$ must be positive numbers)'
                    ]
                },
                r'A department store received 324 pairs of sports shoes packed in boxes of the same size. If 2 boxes contain 72 pairs of shoes, how many boxes did the store receive?'
            ]
        },  # a section object
        {
            'title': '3.7 Simple equations (3)',
            'exercises': [
                {
                    'description': r'Write the equation first and then find the solution',
                    'exercises': [
                        r'The sum of 4 times $x$ and 3.2 is 9.8. Find $x$.',
                    ]
                },
                r'Solve the equation: $7.8-x\div3=2.2$',
            ]
        },  # a section object
        {
            'title': '3.8 Using equations to solve problems (1)',
            'exercises': [
                {
                    'description': r'Write equations and then solve the application problems.',
                    'exercises': [
                        r'The area of a rectangle is 36 cm\textsuperscript{2}. The length is 8 cm. What is the width of the rectangle?',
                    ]
                },
                r'Julie bought 1 bath towel and 4 hand towels for \pounds20. Given that the price of the bath towel was \pounds6, how much did each hand towel cost?'
            ]
        },  # a section object
    ]
}  # a book object
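A consumer of a book object in this shape has to handle two kinds of entries in each section's 'exercises' list: plain LaTeX strings, and sub-group dicts with their own nested 'exercises'. A small sketch of a walker that counts leaf exercises; the tiny sample book below is invented but follows the same shape as ZHENGLIN_YEAR6.

```python
# Counts leaf exercises in a book object shaped like ZHENGLIN_YEAR6:
# sub-groups are dicts carrying a 'description' and a nested 'exercises'
# list; plain exercises are raw LaTeX strings.
def count_exercises(items):
    total = 0
    for item in items:
        if isinstance(item, dict):            # a sub-group with a description
            total += count_exercises(item['exercises'])
        else:                                 # a plain LaTeX exercise string
            total += 1
    return total

# Invented sample book in the same shape.
sample_book = {
    'title': 'Sample',
    'sections': [
        {'title': 'S1', 'exercises': [r'$1+1=\rule{2cm}{0.15mm}$']},
        {'title': 'S2', 'exercises': [
            {'description': 'Fill in', 'exercises': [r'$a$', r'$b$']},
            r'$c$',
        ]},
    ],
}
total = sum(count_exercises(s['exercises']) for s in sample_book['sections'])
print(total)  # 1 from S1, plus 2 nested and 1 plain from S2 -> 4
```

The same two-branch dispatch (dict vs. string) is what an exam renderer for the 'exam' template would use when emitting numbered questions.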
Curve fitting with custom function
82 views (last 30 days)

I am trying to do a curve fit on some experimental data with a custom function; more specifically, I am trying to bring my function closer to the data. The function is the one implemented as viscCar in the answer below. Every parameter of the function, except the shear rate, can vary in order to best fit the data. Here is a graph with the data and the function plotted with the initial values; the initial values are the ones used as B0 below. I already tried to use the Curve Fitting Toolbox but I wasn't able to keep the form of the function. Does anyone know how I can do that? Thank you for the help!

Star Strider on 17 Nov 2021
It would help to have the data. I would use the fitnlm function for this —

shearRate = logspace(-3, 4, 100);                                % Create Data
muv = 0.5 - tanh(shearRate)*0.01 + randn(size(shearRate))*0.01;  % Create Data
shearRate = shearRate(:);
muv = muv(:);
% muInf = 0.0035;   % [Pa.s]
% mu0   = 0.108;    % [Pa.s]
B0 = [0.0035; 0.108; 8.2; 0.3; 0.64];
viscCar = @(muInf,mu0,lambda,a,n,shearRate) muInf + (mu0-muInf)./(1+(lambda.*shearRate).^a).^((n-1)/a);
viscCarfcn = @(b,shearRate) viscCar(b(1),b(2),b(3),b(4),b(5),shearRate);
mumdl = fitnlm(shearRate, muv, viscCarfcn, B0)
Beta = mumdl.Coefficients.Estimate
plot(shearRate, muv, '.b')

This works, and would actually make sense with the actual data.

More Answers (1)
Alex Sha on 18 Nov 2021
It is hard to get a stable and unique result for Da125's problem, especially for the parameters "lambda" and "n". Refer to the result below:
Root of Mean Square Error (RMSE): 0.00032178294087402
Sum of Squared Residual: 2.89923930905092E-6
Correlation Coef. (R): 0.998376698597668
Parameter Best Estimate
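For readers outside MATLAB, the same fit can be sketched in Python with scipy.optimize.curve_fit. The model is the Carreau-Yasuda form used in viscCar above; the synthetic data, starting guess, and bounds below are illustrative assumptions, not the asker's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def carreau_yasuda(shear_rate, mu_inf, mu_0, lam, a, n):
    """Carreau-Yasuda viscosity model, same form as the MATLAB viscCar."""
    return mu_inf + (mu_0 - mu_inf) / (1 + (lam * shear_rate)**a)**((n - 1) / a)

# Synthetic "experimental" data generated from assumed parameters.
true_params = [0.0035, 0.108, 8.2, 0.3, 0.64]
shear_rate = np.logspace(-3, 4, 100)
mu = carreau_yasuda(shear_rate, *true_params)

# Fit all five parameters, starting from a perturbed guess. The bounds keep
# the optimizer away from negative values, where the fractional powers
# would produce NaNs.
p0 = [0.003, 0.1, 8.0, 0.35, 0.6]
popt, pcov = curve_fit(carreau_yasuda, shear_rate, mu, p0=p0,
                       bounds=(0, np.inf))
print(popt)  # fitted parameters
```

As Alex Sha notes above, lambda and n are weakly identified, so with noisy real data the fitted values can vary a lot between runs even when the fitted curve itself is good; checking the residual, not just the parameters, is the safer test.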
.NET components for Office 2019, 2016, 2013, 2010 ribbons: custom menu, tabs and controls in VB.NET, C# Starting from version 2007, Office provides the Ribbon user interface. Microsoft states that the interface makes it easier and quicker for users to achieve the wanted results. You extend the Ribbon interface by using the XML markup that the COM add-in returns to the host application through an appropriate interface when your add-in is loaded into a host version supporting the Office Ribbon UI. The Add-in Express Toolbox provides about 50 components for customizing the Microsoft Office 2019, 2016, 2013, 2010 and 2007 Ribbon that undertake the task of creating the markup. Also, there are five visual designers that allow creating the UI of your add-in: Ribbon Tab (ADXRibbonTab), Ribbon Office Menu (ADXRibbonOfficeMenu), Quick Access Toolbar (ADXRibbonQuickAccessToolbar), Ribbon BackstageView (ADXBackStageView), and Ribbon Context Menu (ADXRibbonContextMenu). In Office 2010, Microsoft abandoned the Office Button (introduced in Office 2007) in favor of the File Tab (also known as Backstage View). When the add-in is being loaded in Office 2010, 2013, 2016 or 2019, ADXRibbonOfficeMenu maps your controls to the File tab unless you have an ADXBackStageView component in your add-in; in this case, all the controls you add to ADXRibbonOfficeMenu are ignored. Microsoft requires developers to use the StartFromScratch parameter (see the StartFromScratch property of the add-in module) when customizing the Quick Access Toolbar. When your add-in is being loaded by a host application supporting the Ribbon UI, the very first event received by the add-in is the OnRibbonBeforeCreate event of the add-in module (in a pre-Ribbon Office application, the very first event is OnAddinInitialize). This is the only event in which you can add/remove/modify the Ribbon components onto/from/on the add-in module.
Then Add-in Express generates the XML markup reflecting the settings of the components and raises the OnRibbonBeforeLoad event. In that event, you can modify the generated markup, say, by adding XML tags generating extra Ribbon controls. Finally, the markup is passed to Office and the add-in module fires the OnRibbonLoaded event. In the event parameters, you get an object of the AddinExpress.MSO.IRibbonUI type that allows invalidating a control; you call the corresponding methods when you need the Ribbon to re-draw the control. Also, in Office 2010 - 2019 only, you can call a method activating a tab.

Remember, the Ribbon designers perform the XML-schema validation automatically, so from time to time you may run into a situation where you cannot add a control at some level. It is a restriction of the Ribbon XML schema. Still, we recommend turning on the Ribbon XML validation mechanism through the UI of the host application of your add-in; you need to look for a check box named "Show add-in user interface errors", see here.

All built-in Ribbon controls are identified by their IDs. While the ID of a command bar control is an integer, the ID of a built-in Ribbon control is a string. IDs of built-in Ribbon controls can be downloaded from the Microsoft web site: for Office 2007, for Office 2010 and for Office 2013. The downloads install Excel files; the Control Name column of each contains the IDs of almost all built-in Ribbon controls for the corresponding Ribbon (see the screenshot below). You may also find these files useful when dealing with changes in the Office 2013 Ribbon UI. Find more details about using them here. See also How to find the Id of a built-in Ribbon control.

Add-in Express Ribbon components provide the IdMso property; if you leave it empty, the component will create a custom Ribbon control. To refer to a built-in Ribbon control, you set the IdMso property of the component to the ID of the built-in Ribbon control.
For instance, you can add a custom Ribbon group to a built-in tab. To do this, you add a Ribbon tab component onto the add-in module and set its IdMso to the ID of the required built-in Ribbon tab. Then you add your custom group to the tab and populate it with controls. Note that the Ribbon does not allow adding a custom control to a built-in Ribbon group.

You use the Ribbon Command (ADXRibbonCommand) component to override the default action of a built-in Ribbon control. Note that the Ribbon allows intercepting only buttons, toggle buttons and check boxes; see the ActionTarget property of the component. You specify the ID of the built-in Ribbon control to be intercepted in the IdMso property of the component. To get such an ID, see Referring to built-in Ribbon controls.

The Ribbon Command component also allows disabling built-in Ribbon controls such as buttons, check boxes, menus, groups, etc. To achieve this, you specify the IdMso of the corresponding Ribbon control (see Referring to built-in Ribbon controls), set an appropriate value for the ActionTarget property, and specify Enabled=false. Below are two examples showing how you use the Ribbon Command component to prevent the user from 1) copying the selected text and 2) changing the font size of the selected text.

Every Ribbon component provides the InsertBeforeId, InsertBeforeIdMso, InsertAfterId and InsertAfterIdMso properties. You use the InsertBeforeId and InsertAfterId properties to position the control among other controls created by your add-in; just specify the Id of the corresponding component in either property. The InsertBeforeIdMso and InsertAfterIdMso properties allow positioning the control among built-in Ribbon controls (see also Referring to built-in Ribbon controls).

Most Ribbon controls in Office require 32x32 or 16x16 icons. A Ribbon gallery allows using 16x16, 32x32, 48x48, or 64x64 icons. Supported formats are BMP, PNG and ICO, at any color depth.
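As a hand-written illustration of the markup such a customization boils down to (Add-in Express generates this from the components for you; the control ids, labels and callback names below are made up for the example), a custom group placed on the built-in Home tab might look like:

```xml
<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui">
  <ribbon>
    <tabs>
      <!-- idMso refers to a built-in tab; id would declare a custom one -->
      <tab idMso="TabHome">
        <group id="MyGroup" label="My Group" insertAfterMso="GroupFont">
          <button id="MyButton" label="Do Something" size="large"
                  imageMso="HappyFace" onAction="MyButton_OnAction" />
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
```

Note how insertAfterMso positions the custom group relative to a built-in group, mirroring the InsertAfterIdMso property described above.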
You specify the icon using either the ImageList, Image and ImageTransparentColor properties or the Glyph property that the corresponding Ribbon component provides. Note that Glyph allows bypassing a bug in the ImageList component: ImageList cuts the alpha channel out. In addition, Glyph accepts a multiple-page .ICO file containing several images. If provided with such an .ICO, the Ribbon component chooses an image from the .ICO at add-in startup as shown in the following table. The image selection mechanism only works if you choose the Project resource file option (see the screenshot below) in the Select Resource dialog box opened when you invoke the property editor for the Glyph property. Choosing the Local resource option implies that the multi-page icon logic won't be used.

You cannot create Ribbon controls at run time because the Ribbon is static from birth (but see How Ribbon controls are created?). The only control providing any dynamism is the Dynamic Menu: if the ADXRibbonMenu.Dynamic property is set to True at design time, the component generates the OnCreate event, allowing you to create menu items at run time (see the sample code below). For other control types, you can only imitate that dynamism by setting the Visible property of a Ribbon control. For instance, you may need to display a custom control positioned before or after some other built-in control. To achieve this, you create two controls and specify the IdMso of the built-in control in the InsertBeforeIdMso and InsertAfterIdMso properties of the custom controls. At run time, you just change the visibility of the custom controls so that only one of them is visible.

The sample code below demonstrates creating a custom control in the OnCreate event of an ADXRibbonMenu component. For the sample to work, the Dynamic property of the component must be set to True. Note that the OnCreate event occurs whenever you open a dynamic menu. Another example creates a dynamic submenu in a menu, which is dynamic as well.
Here we also demonstrate using e.Contains() to check if a control exists on the menu.

Add-in Express components implement two schemas of refreshing Ribbon controls. The simple schema lets you change a property of the Ribbon component, and the component will supply it to the Ribbon UI whenever it requests that property. This mechanism is ideal when you need to display static or almost static things, such as a button caption that doesn't change, or that changes across all windows showing the button, say in Outlook inspectors or Word documents. This works because Add-in Express supplies the same value for the property whenever the Ribbon UI invokes the corresponding callback function.

However, if you need full control over the UI, say, when you need to show different captions of a Ribbon button in different Inspector windows or Word documents, you can use the PropertyChanging event provided by all Ribbon components. That event occurs when the Ribbon expects that you can supply a new value for a property of the Ribbon control: Caption, Visible, Enabled, Tooltip, etc. The event allows you to learn the current context (see Determining a Ribbon control's context), the requested property and its current value. You can change that value as required by the business logic of your add-in.

A Ribbon control is shown in a certain context. For the developer, the context is either null (Nothing in VB.NET) or a COM object that you might need to release after use (according to the rule given in Releasing COM objects). You retrieve and release the context object in these ways: for a Ribbon control shown on a Ribbon tab, the context represents the window in which the Ribbon control is shown: Excel.Window, Word.Window, PowerPoint.DocumentWindow, Outlook.Inspector, Outlook.Explorer, etc. For a Ribbon control shown in a Ribbon context menu, the context object may not be a window, e.g. Outlook.Selection, Outlook.AttachmentSelection, etc.
When debugging the add-in, we recommend that you find the actual type name of the context object by using Microsoft.VisualBasic.Information.TypeName(). This requires that your project reference Microsoft.VisualBasic.dll.

You start by assigning the same string value to the AddinModule.Namespace property of every add-in that will share your controls. This makes Add-in Express add two xmlns attributes to the customUI tag in the resulting XML markup. Originally, all Ribbon controls are located in the default namespace (id="%Ribbon control's id%" or idQ="default:%Ribbon control's id%") and you have full control over them via the callbacks provided by Add-in Express. When you specify the Namespace property, Add-in Express changes the markup to use idQ's instead of id's.

Then, in all add-ins that are to share a control, for a container control with the same Id (you can change the Id's to match), you set the Shared property to true. For a control whose Shared property is true, Add-in Express changes its idQ to use the shared namespace (idQ="shared:%Ribbon control's id%") instead of the default one. Also, for such Ribbon controls, Add-in Express cuts out all callbacks and replaces them with "static" versions of the attributes. Say, getVisible="getVisible_CallBack" will be replaced with visible="%value%". The shareable Ribbon controls are the following container controls:

When referring to a shared Ribbon control in the InsertBeforeId and InsertAfterId properties of another Ribbon control, you use the shared control's idQ: %namespace abbreviation% + ":" + %control id%. The abbreviations of these namespaces are the "default" and "shared" string values. Say, when creating a shared tab containing a private group, which in turn contains a (again private) button, the resulting XML markup looks as follows. You can download an example here.
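The markup the passage references did not survive extraction; a hand-written sketch of what shared-namespace markup of this shape might look like (the namespace strings and control ids are invented for illustration and are not the generated output) is:

```xml
<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui"
          xmlns:default="MyAddinNamespace"
          xmlns:shared="MySharedNamespace">
  <ribbon>
    <tabs>
      <!-- shared container: "static" attributes, callbacks cut out -->
      <tab idQ="shared:SharedTab" label="Shared Tab" visible="true">
        <!-- private group: owned by this add-in, callbacks intact -->
        <group idQ="default:MyGroup" label="My Group" getVisible="getVisible_CallBack">
          <button idQ="default:MyButton" label="My Button" onAction="MyButton_OnAction" />
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
```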
"use strict"; Object.defineProperty(exports, "__esModule", { value: true }); const collection_utils_1 = require("collection-utils"); const TypeUtils_1 = require("../TypeUtils"); const Support_1 = require("../support/Support"); const TypeAttributes_1 = require("../TypeAttributes"); const StringTypes_1 = require("../StringTypes"); const MIN_LENGTH_FOR_ENUM = 10; const MIN_LENGTH_FOR_OVERLAP = 5; const REQUIRED_OVERLAP = 3 / 4; function isOwnEnum({ numValues, cases }) { return numValues >= MIN_LENGTH_FOR_ENUM && cases.size < Math.sqrt(numValues); } function enumCasesOverlap(newCases, existingCases, newAreSubordinate) { const smaller = newAreSubordinate ? newCases.size : Math.min(newCases.size, existingCases.size); const overlap = collection_utils_1.setIntersect(newCases, existingCases).size; return overlap >= smaller * REQUIRED_OVERLAP; } function isAlwaysEmptyString(cases) { return cases.length === 1 && cases[0] === ""; } function expandStrings(ctx, graph, inference) { const stringTypeMapping = ctx.stringTypeMapping; const allStrings = Array.from(graph.allTypesUnordered()).filter(t => t.kind === "string" && TypeUtils_1.stringTypesForType(t).isRestricted); function makeEnumInfo(t) { const stringTypes = TypeUtils_1.stringTypesForType(t); const mappedStringTypes = stringTypes.applyStringTypeMapping(stringTypeMapping); if (!mappedStringTypes.isRestricted) return undefined; const cases = Support_1.defined(mappedStringTypes.cases); if (cases.size === 0) return undefined; const numValues = collection_utils_1.iterableReduce(cases.values(), 0, (a, b) => a + b); if (inference !== "all") { const keys = Array.from(cases.keys()); if (isAlwaysEmptyString(keys)) return undefined; const someCaseIsNotNumber = collection_utils_1.iterableSome(keys, key => /^(\-|\+)?[0-9]+(\.[0-9]+)?$/.test(key) === false); if (!someCaseIsNotNumber) return undefined; } return { cases: new Set(cases.keys()), numValues }; } const enumInfos = new Map(); const enumSets = []; if (inference !== "none") { for 
(const t of allStrings) { const enumInfo = makeEnumInfo(t); if (enumInfo === undefined) continue; enumInfos.set(t, enumInfo); } function findOverlap(newCases, newAreSubordinate) { return enumSets.findIndex(s => enumCasesOverlap(newCases, s, newAreSubordinate)); } // First, make case sets for all the enums that stand on their own. If // we find some overlap (searching eagerly), make unions. for (const t of Array.from(enumInfos.keys())) { const enumInfo = Support_1.defined(enumInfos.get(t)); const cases = enumInfo.cases; if (inference === "all") { enumSets.push(cases); } else { if (!isOwnEnum(enumInfo)) continue; const index = findOverlap(cases, false); if (index >= 0) { // console.log( // `unifying ${JSON.stringify(Array.from(cases))} with ${JSON.stringify( // Array.from(enumSets[index]) // )}` // ); enumSets[index] = collection_utils_1.setUnion(enumSets[index], cases); } else { // console.log(`adding new ${JSON.stringify(Array.from(cases))}`); enumSets.push(cases); } } // Remove the ones we're done with. enumInfos.delete(t); } if (inference === "all") { Support_1.assert(enumInfos.size === 0); } // Now see if we can unify the rest with some a set we found in the // previous step. 
for (const [, enumInfo] of enumInfos.entries()) { if (enumInfo.numValues < MIN_LENGTH_FOR_OVERLAP) continue; const index = findOverlap(enumInfo.cases, true); if (index >= 0) { // console.log( // `late unifying ${JSON.stringify(Array.from(enumInfo.cases))} with ${JSON.stringify( // Array.from(enumSets[index]) // )}` // ); enumSets[index] = collection_utils_1.setUnion(enumSets[index], enumInfo.cases); } } } function replaceString(group, builder, forwardingRef) { Support_1.assert(group.size === 1); const t = Support_1.defined(collection_utils_1.iterableFirst(group)); const stringTypes = TypeUtils_1.stringTypesForType(t); const attributes = collection_utils_1.mapFilter(t.getAttributes(), a => a !== stringTypes); const mappedStringTypes = stringTypes.applyStringTypeMapping(stringTypeMapping); if (!mappedStringTypes.isRestricted) { return builder.getStringType(attributes, StringTypes_1.StringTypes.unrestricted, forwardingRef); } const types = []; const cases = Support_1.defined(mappedStringTypes.cases); if (cases.size > 0) { const keys = new Set(cases.keys()); const fullCases = enumSets.find(s => collection_utils_1.setIsSuperset(s, keys)); if (inference !== "none" && !isAlwaysEmptyString(Array.from(keys)) && fullCases !== undefined) { types.push(builder.getEnumType(TypeAttributes_1.emptyTypeAttributes, fullCases)); } else { return builder.getStringType(attributes, StringTypes_1.StringTypes.unrestricted, forwardingRef); } } types.push(...Array.from(mappedStringTypes.transformations).map(k => builder.getPrimitiveType(k))); Support_1.assert(types.length > 0, "We got an empty string type"); return builder.getUnionType(attributes, new Set(types), forwardingRef); } return graph.rewrite("expand strings", stringTypeMapping, false, allStrings.map(t => [t]), ctx.debugPrintReconstitution, replaceString); } exports.expandStrings = expandStrings;
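The merging heuristic above can be exercised in isolation. Here is a self-contained sketch (setIntersect is re-implemented inline so collection-utils is not required; the sample sets are made up): two case sets are merged when their intersection covers at least REQUIRED_OVERLAP (3/4) of the smaller set.

```javascript
// Standalone version of the overlap test used by expandStrings.
const REQUIRED_OVERLAP = 3 / 4;

// Minimal stand-in for collection-utils' setIntersect.
function setIntersect(a, b) {
  return new Set([...a].filter(x => b.has(x)));
}

function enumCasesOverlap(newCases, existingCases, newAreSubordinate) {
  // Subordinate sets are measured against their own size only.
  const smaller = newAreSubordinate
    ? newCases.size
    : Math.min(newCases.size, existingCases.size);
  const overlap = setIntersect(newCases, existingCases).size;
  return overlap >= smaller * REQUIRED_OVERLAP;
}

const a = new Set(["red", "green", "blue", "black"]);
const b = new Set(["red", "green", "blue", "white"]);
console.log(enumCasesOverlap(a, b, false)); // 3 of 4 cases shared -> true
```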
from django.test import TestCase
from django.contrib.auth import get_user_model
from player.models import Player
from card.models import Card
from lobby.models import Room
from game.models import Game
from board.models import Board

User = get_user_model()


class BaseTestCase(TestCase):
    def setUp(self):
        self._API_BASE = 'http://127.0.0.1:8000/api/'
        self.USER_USERNAME = "testuser"
        self.USER_EMAIL = "testuser@test.com"
        self.USER_PASSWORD = "supersecure"
        user_data = {
            "username": self.USER_USERNAME,
            "email": self.USER_EMAIL,
            "password": self.USER_PASSWORD,
        }
        user = User._default_manager.create_user(**user_data)
        user.save()
        self.assertEqual(User.objects.count(), 1)

    def i_user(self, username, email, password):
        return User.objects.create(username=username, email=email, password=password)

    def i_player(self, user, game, colour='rojito'):
        return Player.objects.create(user=user, colour=colour, game=game)

    def i_card(self, player, card_type='road_building'):
        return Card.objects.create(player=player, card_type=card_type)

    def i_room(self, name, owner, players, board):
        new_room = Room.objects.create(
            name=name,
            owner=User.objects.get(username=owner),
            board=Board.objects.get(name=board)
        )
        new_room.players.set(User.objects.filter(username__contains=players))
        return new_room  # return the room so tests can assert on it

    def i_board(self, name, owner):
        return Board.objects.create(name=name, owner=owner)

    def i_game(self):
        return Game.objects.create()
[Openswan Users] Single interface / tunnel will not come up.
bruce at secryption.com  Fri Jan 31 06:27:01 EST 2014

I'm hoping someone here can point me in the right direction. I'm trying to get an IPsec VPN up from a Cisco 2811 to a hosted virtual server. It shouldn't be that tough, from all that I've read. Here's the setup (obviously, external IPs have been changed for security purposes):

192.168.30.0/24 ------- 220.127.116.11 -- INTERNET -- 18.104.22.168

22.214.171.124 is the only interface available on the VPS, and it's an external address. This is my first guess as to where the problem is, but I haven't found a good example of how to deal with this. The end goal here is to push all web, and various other, traffic over the VPN. See the config below. NAT shouldn't be a problem since I have the Cisco not NATting the traffic that I want to flow through the tunnel. Right now it's just ICMP traffic for testing:

ip access-list extended NAT
 deny icmp 192.168.30.0 0.0.0.255 any

Finally, there seems to be some discrepancy in encryption methods between Cisco and Openswan. I've tried just about every combination. So far I can verify that it gets far enough to say the keys match. Then I think it actually finishes the phase 1 tunnel, but I'm not exactly sure. Before I make any more changes I want to make sure I have the actual Openswan config right, since the network layout is a little odd.

crypto isakmp policy 1
 encr aes 192
crypto isakmp key ********** address 126.96.36.199
crypto ipsec transform-set IOFSET2 esp-aes 192 esp-sha-hmac
crypto map IOFVPN 1 ipsec-isakmp
 set peer 188.8.131.52
 set transform-set IOFSET2
 match address 152
access-list 152 permit icmp any any

# Left security gateway, subnet behind it, nexthop
# Right security gateway, subnet behind it, nexthop
# To authorize this connection, but not actually start it
# at startup, uncomment this.
Watching the logs, I'm to the point now where I'm getting this from the 2811:

2d11h: IPSEC(validate_transform_proposal): no IPSEC cryptomap exists for local address 184.108.40.206
2d11h: ISAKMP:(0:2:SW:1): IPSec policy invalidated proposal
2d11h: ISAKMP:(0:2:SW:1): phase 2 SA policy not acceptable! (local 220.127.116.11 remote 18.104.22.168)

and this from Openswan:

Jan 31 11:13:51 196-55-235-37 pluto: "IOF" #27: the peer proposed: 0.0.0.0/0:0/0 -> 0.0.0.0/0:0/0
Jan 31 11:13:51 196-55-235-37 pluto: "IOF" #27: cannot respond to IPsec SA request because no connection is known for
Jan 31 11:13:51 22.214.171.124 pluto: "IOF" #27: sending encrypted notification INVALID_ID_INFORMATION to 126.96.36.199:500

I understand the why (the ACL doesn't match on both sides), but I'm not sure how to get around this with the Openswan box only having a single NIC. I've tried a few different things but it fails. I'm half wondering whether it wouldn't be easiest to add a sub-interface on the Openswan side just so it has a second network to make it happy, although I'd prefer not to. I'm open to suggestions.

Public key: https://www.secryption.com/BruceMarkey.asc
I believe that any violation of privacy is nothing good.
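For context on the selector mismatch the logs show: a single-NIC Openswan conn for this topology might be sketched roughly as below. This is a guess assembled from the (sanitized) addresses in the post, not a verified configuration; the key point is that leftsubnet/rightsubnet must agree with the crypto ACL on the Cisco side, and `permit icmp any any` proposes 0.0.0.0/0 selectors, which is exactly what the pluto log rejects.

```
# Hypothetical ipsec.conf sketch -- addresses mirror the sanitized post
conn IOF
        left=22.214.171.124           # the VPS's single public interface
        leftsubnet=22.214.171.124/32  # no LAN behind the VPS
        right=220.127.116.11          # Cisco 2811 outside address
        rightsubnet=192.168.30.0/24   # LAN behind the Cisco
        authby=secret
        ike=aes192-sha1
        phase2alg=aes192-sha1
        auto=add
```

The matching Cisco crypto ACL would then name the same pair of networks (e.g. `access-list 152 permit icmp 192.168.30.0 0.0.0.255 host <VPS address>`) so both peers propose identical phase 2 selectors.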
Please transfer a PHP website into WordPress, including: creating a responsive theme from the existing website template, transferring content to the new website, and transferring across all embedded forms. cPanel access will be provided.

I want an attractive sidebar on my site, looking like the one in the photo, so have a look, and if you are able to make it, tell me. Search [login to view URL] on mobile and then see the sidebar. I need it urgently for my site ... WordPress.

The idea is to build a theme with about 5 custom modules/post types; some of the modules will have public registrations or forms, and also CMS editing capabilities. We will provide the design assets (fonts, PSD, images, content). We would need you to create the theme, template, HTML production, CSS, JS, PHP and DB, based on wireframes.

I have an e-commerce (deal website) theme purchased from Envato Market; however, the customer/vendor profile page and how it functions aren't what I need, hence I need it recreated with the full functionality of my requirement. Further details if you are up for the task. Thank you.

Hi, I want PHP framework real estate site development. The DB should use NoSQL, because I have to put huge data in the DB, about 2 million records.

This is my WordPress theme site; it should be 100% identical in functionality and design. This is my site. You need to develop my site after analysis: 100% identical design and functionality. You will replicate my site.

Language pair: English/Chinese to Mongolian. Content: Christian spiritual articles, testimonies. Total volume: 600k words. Expected deadline: May 2019. Rate: flexible and depends upon quality, turnaround time and service. Also, we are open to discussion. Potential workload: we will send assignments according to your output capacity and schedule. Requirements: 1. Native speaker of target language w...

...tech company. Responsive theme design + dummy pages of content managed by the WordPress admin. Contact-us form with Gmail integration. Bilingual support.
Free version of WordPress on LAMP, PHP 7. Minimal code customization; realize it by installing a reasonable number of plugins. Full customization guidelines and code are required. The theme should be easy to maintain.

We are using the AIT Directory+ theme, from which we have created a child theme to customise. Most of the work has been done but we require assistance with the following: 1. Customising the order in which custom post types (items) are displayed on the search results page. The order has already been set in accordance with an ACF field but a random ... drag/drop plugin and gets weird if you do drag items because children don't follow. 2) Output/utilization: I have a PHP script that I've written that first goes through the general navbar/menu parameters, adding the relevant [login to view URL] file to the page, then adding other navbar parameters like fixed top, justified, etc., ul left or right if split.

Hello, I currently have a website built with Laravel, which is a PHP framework. In order to have more flexibility and not depend on developers all the time, I am thinking about moving to WordPress. So I need a WordPress expert to create my platform in WordPress and move all data. Basically, my current platform (built with Laravel) has: 1. Blog

Hi there, I'd like to build a directory portal featuring a business directory, classifieds, events, blog/news, marketplace, and coupons/deals. Bespoke, WordPress theme, PHP script or any other software/platform can be used. Reply and share your idea/offer if you have done similar projects before. Thanks.

...child theme * All links, scripts, resources, etc. must utilize HTTPS and not HTTP * Image names must be proper for SEO (no random naming convention); we'll provide most images with the proper names * Do not hard code anything into the theme unless specifically asked (i.e.
use the capabilities of the template and plugins instead of creating PHP and saving.

We have a web page which is based on a custom WordPress CMS; however, due to constant updates it crashes. We recreate this WordPress theme for our customers, so a custom PHP installation should be done so it can easily be duplicated for our customers on their servers. It should be responsive and have a proper admin panel for easier modification of the front end and back end.

Looking for a designer to show their creativity in designing corporate business themes, brands and logos, and I will try them on the task below. I will have many more tasks in the future, so if I like your work, it will be a long-term business relationship. Business name: Cactily Visual Concepts. Type of business: Trade Show Exhibit Design Firm. The name (Cactily) is derived from (Cactus). Pre...

...need a child theme to integrate all custom changes. A simple installation failed because, for whatever reason, it changed the menu fonts and a few details in the child theme. 2-3 people without sufficient coding skills looked into it but weren't able to solve the issue. WP runs with Thrive Architect and the Thrive theme "Squared". The theme developer doesn...

So we have a news section on our website, but we need to make adjustments to the theme so it looks presentable to our customers. Most of the design is already in place, but some formatting errors are there. Budget is flexible; timescale is ASAP.
Applying an lm function to different ranges of data and separate groups using data.table

How do I perform a linear regression using different intervals for data in different groups in a data.table? I am currently doing this using plyr, but with large data sets it gets very slow. Any help to speed up the process is greatly appreciated.

I have a data table which contains 10 counts of CO2 measurements over 10 days, for 10 plots and 3 fences. Different days fall into different time periods, as described below. I would like to perform a linear regression to determine the rate of change of CO2 for each fence, plot and day combination, using a different interval of counts during each period. Period 1 should regress CO2 using counts 1-5, period 2 using 1-7 and period 3 using 1-9.

CO2 <- rep((runif(10, 350,359)), 300)   # 10 days, 10 plots, 3 fences
count <- rep((1:10), 300)               # 10 days, 10 plots, 3 fences
DOY <- rep(rep(152:161, each=10), 30)   # 10 measurements/day, 10 plots, 3 fences
fence <- rep(1:3, each=1000)            # 10 days, 10 measurements, 10 plots
plot <- rep(rep(1:10, each=100), 3)     # 10 days, 10 measurements, 3 fences
flux <- as.data.frame(cbind(CO2, count, DOY, fence, plot))
flux$period <- ifelse(flux$DOY <= 155, 1, ifelse(flux$DOY > 155 & flux$DOY < 158, 2, 3))
flux <- as.data.table(flux)

I expect an output which gives me the R2 fit and the slope of the line for each plot, fence and DOY. The data I have provided is a small subsample; my real data has 1*10^6 rows.
The following works, but is slow:

model <- function(df) {
  lm(CO2 ~ count,
     data = subset(df, ifelse(df$period == 1, count > 1 & count < 5,
                       ifelse(df$period == 2, count > 1 & count < 7,
                                              count > 1 & count < 9))))
}
model_flux <- dlply(flux, .(fence, plot, DOY), model)
rsq <- function(x) summary(x)$r.squared
coefs_flux <- ldply(model_flux, function(x) c(coef(x), rsquare = rsq(x)))
names(coefs_flux)[1:5] <- c("fence", "plot", "DOY", "intercept", "slope")

Here is a "data.table" way to do this:

library(data.table)
flux <- as.data.table(flux)
setkey(flux, count)
flux[, include := (period == 1 & count %in% 2:4) |
                  (period == 2 & count %in% 2:6) |
                  (period == 3 & count %in% 2:8)]
flux.subset <- flux[(include), ]
setkey(flux.subset, fence, plot, DOY)
model <- function(df) {
  fit <- lm(CO2 ~ count, data = df)
  return(list(intercept = coef(fit)[1], slope = coef(fit)[2],
              rsquare = summary(fit)$r.squared))
}
coefs_flux <- flux.subset[, model(.SD), by = "fence,plot,DOY"]

Unless I'm missing something, the subsetting you do in each call to model(...) is unnecessary. You can segment the counts by period in one step at the beginning. This code yields the same results as yours, except that dlply(...) returns a data frame and this code produces a data table. It isn't much faster on this test dataset.

Thank you. This is so much faster on the full data set and works beautifully. The only reason to include subsetting in each call to model(...) is that I am using the model repeatedly on several data sets. It saves me having to subset each data set individually.
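For comparison outside R, the same subset-then-group pattern can be sketched with pandas and numpy. The column names mirror the R example; the data sizes and the count windows (1-5, 1-7, 1-9, as stated in the question) are assumptions, and np.polyfit stands in for lm.

```python
# Grouped per-(fence, plot, DOY) regressions with period-dependent count windows.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = [152, 156, 158]  # one day per period, for brevity
rows = [(f, p, d, c, 350 + 0.5 * c + rng.normal(0, 0.1))
        for f in (1, 2) for p in (1, 2) for d in days for c in range(1, 11)]
flux = pd.DataFrame(rows, columns=["fence", "plot", "DOY", "count", "CO2"])
flux["period"] = np.select([flux.DOY <= 155, flux.DOY < 158], [1, 2], default=3)

# period -> inclusive count window used for the fit
windows = {1: (1, 5), 2: (1, 7), 3: (1, 9)}

def fit(group):
    lo, hi = windows[group["period"].iloc[0]]
    g = group[group["count"].between(lo, hi)]       # subset once per group
    slope, intercept = np.polyfit(g["count"], g["CO2"], 1)
    r = np.corrcoef(g["count"], g["CO2"])[0, 1]
    return pd.Series({"intercept": intercept, "slope": slope, "rsquare": r ** 2})

coefs = flux.groupby(["fence", "plot", "DOY"]).apply(fit).reset_index()
```

As in the data.table answer, the subsetting happens once per group rather than inside a model function that re-derives the window on every call.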
Maybe it's just an infatuation of mine, or maybe it's just because I'm a guy and "BIGGER! BIGGER! BIGGER!" has been burned into my brain, but storage is more interesting than many of the other aspects of my job. There are many ways to handle storage and backup, as there are many different situations, and again, I'm probably just geeky, but on a few occasions I've been caught daydreaming about another solution that I'd like to implement (Oooooh, 14TB NAS...). Which brings me to my question.

How do you handle storage (and how much?), and its backup? How much redundancy is built into your system? How many forms of backup do you keep? Do you have armed guards at the door to your storage facility? Do they wear kilts? And lastly, which will put it all into perspective: what environment are you in that justifies your current solution? What is your user base like?

To start things off: I'm at a university in Southeast Alaska. Currently, we have a very small amount of storage (300GB maybe, max?). Amazingly, that not only holds most of the common data, but the information from every user's My Documents folder. To back up, currently our only solution is a revolving tape backup. We have two SANs sitting on a shelf ready to be deployed (4TB total I think, which will be 2TB redundant), but who knows when that will get finished. How we are going to back up all that hasn't been finalized yet, but my boss and I agree that tape isn't the only answer. We have been looking at buying an array of hot-swappable SATA drives and keeping another server that mirrors everything, rotating the drives, as well as keeping off-site tape backups once a week/month. As for mobile users... email is the win?

Tapes go offsite once a week. Unfortunately we've had a big surge in the backup size as we're doing a lot more multimedia stuff. It used to be that everything fit on 1 tape (Sony AIT - 100GB); now I've had to split the backup onto 3 tapes so I don't have to swap tapes in the morning.
I'm at a small association. 35 users, total backup (critical data only) is around 200GB. We want to go to online backup but the pricing (for a good service) is still beyond our budget. Failing a 'real' online backup service, I'd like to do something with Symantec NetBackup (or any deduplication backup software) and then replicate the backup data to our offsite webserver.
""" Intuit Challenge - Building Relationships Author: Pavan Bhat (pxb8715.rit.edu) """ # All imports here import os import csv from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap import numpy as npy class clean_and_prep: ''' This class is used to perform classification of different individuals into groups that can be used to group them based on their lifestyle and purchasing power. ''' # Variables: # Unique list of users - authorization id auth_id = [] # List of (raw) personal transport expenditure personal_transport_expenses = [] # Cumulative local transportation expenses transportation_expenses = [] # List of (raw) personal income after basic needs and commitments personal_income = [] # List of specific income sources and basic cumulative deductions purchasing_income = [] # Create color maps cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF']) # Epochs for cleaning and preparation of data: def data_preparation(self): ''' This function is used to perform different preparation required before the data can be classified for building relationships. 
:return: None ''' for n in range(100): # Accessing the mint financial transaction data mint_financial_transaction_file = open('user-' + str(n) + '.csv') csv_file = csv.reader(mint_financial_transaction_file) # Iterating through each of the file for i in csv_file: # Updating a list of users with unique authorization id if i[0] not in self.auth_id and i[0].isdigit(): self.auth_id.append(i[0]) # Updating a list of personal user transportation expenses if 'Transportation' in i[2]: if 'Bus' in i[2]: self.personal_transport_expenses.append([i[0], i[3], "Bus"]) elif 'Train' in i[2]: self.personal_transport_expenses.append([i[0], i[3], "Train"]) else: self.personal_transport_expenses.append([i[0], i[3], "Public"]) elif 'Uber' in i[2]: self.personal_transport_expenses.append([i[0], i[3], "Uber"]) elif 'Lyft' in i[2]: self.personal_transport_expenses.append([i[0], i[3], "Lyft"]) elif 'Taxi' in i[2]: self.personal_transport_expenses.append([i[0], i[3], "Taxi"]) # Updating a list of income after basic needs and commitments if 'Rent' in i[2]: self.personal_income.append([i[0], i[3], "Rent"]) elif 'Gas' in i[2]: self.personal_income.append([i[0], i[3], "Gas"]) elif 'Water' in i[2]: self.personal_income.append([i[0], i[3], "Water"]) elif 'Paycheck' in i[2]: self.personal_income.append([i[0], i[3], "Paycheck"]) elif 'Loans' in i[2]: self.personal_income.append([i[0], i[3], "Loans"]) # Closing the mint financial transactions file mint_financial_transaction_file.close() def get_purchasing_power(self): ''' Calculation of income and thereby the purchasing power of the individual after basic expenditure split based on sub-categories :return: A list of total purchasing power of each individual ''' for key, value, type in self.personal_income: # Flag that provides a check to whether the data is already present or not check = True for i in self.purchasing_income: if key == i[0]: if type == i[2]: i[1] += float(value) check = False break if check: self.purchasing_income.append([key, 
float(value), type]) # print(self.purchasing_income) # Calculation of Total purchasing power of each individual total_pp = [] for i in self.purchasing_income: # Flag that provides a check to whether the data is already present or not check = True for j in total_pp: if i[0] == j[0]: j[1] += float(i[1]) check = False break if check: total_pp.append([i[0], float(i[1])]) # Classifying data based on status final_pp = [] for k in total_pp: if k[1] >= 10000: final_pp.append([float(k[0])/1000, k[1]/1000, "1"]) elif k[1] < 10000 and k[1] > 0: final_pp.append([float(k[0])/1000, k[1]/1000, "2"]) elif k[1] > -10000 and k[1] <= 0: final_pp.append([float(k[0])/1000, k[1]/1000, "3"]) elif k[1] <= -10000: final_pp.append([float(k[0])/1000, k[1]/1000, "4"]) self.makepp(final_pp) return final_pp def makepp(self, tot_pp): ''' This function is used to write the final purchasing power of different individuals to a new .csv file called "purchasing_power.csv". :return: None ''' filename = 'purchasing_power.csv' # open a csv file to write the purchasing power of each individual write_pp = open(filename, "w") write_row = csv.writer(write_pp, delimiter=',', quoting=csv.QUOTE_MINIMAL, lineterminator='\n') # write_row.writerow(["auth_id", "purchasing_power", "class"]) for j in tot_pp: write_row.writerow(j) self.display_pp(tot_pp) def display_pp(self, tot_pp): ''' This function is used to display the final purchasing power of different individuals with a scatter plot to estimate the distances in relationship of different individuals. :return: None ''' x = [] y = [] for i in tot_pp: x.append(float(i[0])) y.append(float(i[1] / 1000)) plt.scatter(x, y) plt.show() def perform_classification(self, final_list): ''' This function performs classification of different individuals into groups which can get to together to form a likeable relationship based on the traits of different individuals. 
:return: None ''' x = [] y = [] h = .02 # step size in the mesh target =[1,2] for i in final_list: x.append(float(i[0])) y.append(float(i[1] / 1000)) neighbor_array = npy.array([x, y]) # clf = NearestNeighbors(n_neighbors=2, algorithm='ball_tree') clf = KNeighborsClassifier(n_neighbors=2, algorithm='auto') output = clf.fit(neighbor_array, target) distances, indices = output.kneighbors(neighbor_array) # xx, yy = output.kneighbors_graph(neighbor_array).toarray() ## print([xx, yy]) # plt.figure() ## plt.plot(xx, yy) # plt.pcolormesh(xx, yy, target, cmap=self.cmap_light) # plt.show() # Z = clf.predict(npy.c_[xx.ravel(), yy.ravel()]) # x_min, x_max = neighbor_array[:, 0].min() - 1, neighbor_array[:, 0].max() + 1 # y_min, y_max = neighbor_array[:, 1].min() - 1, neighbor_array[:, 1].max() + 1 # xx, yy = npy.meshgrid(npy.arange(x_min, x_max, h), npy.arange(y_min, y_max, h)) # Z = clf.predict(npy.c_[xx.ravel(), yy.ravel()]) # Z = Z.reshape(xx.shape) # plt.figure() # plt.pcolormesh(xx, yy, Z, cmap=self.cmap_light) # plt.show() def get_split_transport_expenses(self): ''' Calculation of transportation data for all users split based on sub-categories :return: None ''' for key, value, type in self.personal_transport_expenses: # Flag that provides a check to whether the data is already present or not check = True for i in self.transportation_expenses: if key == i[0]: if type == i[2]: i[1] += float(value) check = False break if check: self.transportation_expenses.append([key, float(value), type]) # print(self.transportation_expenses) def main(): ''' The main program that executes the entire script :return: None ''' cp = clean_and_prep() cp.data_preparation() cp.get_split_transport_expenses() final_pp = cp.get_purchasing_power() cp.perform_classification(final_pp) if __name__ == '__main__': main()
Hello! I'm Currin Berdine, one of Coursera's longest-tenured employees, and I'm delighted to celebrate our tenth anniversary with you. Since 2014, I've witnessed our team's fantastic accomplishments: from offering hundreds of courses to thousands of learners, expanding our partners to include industry giants and Historically Black Colleges, winning prestigious awards, passing 194 million enrollments, and growing our team from dozens of people in a tiny office in Mountain View, California to over 1,000 employees connected around the globe.

So many parts of Coursera remain steadfast. First is our principle to always learn and grow, as instilled in us often by our founders Daphne Koller and Andrew Ng. In the last 8 years, I went from being the team's first marketing hire to being the marketing team's first full-time web and email developer. My managers have always encouraged me to grow, and they've trusted me. I took courses on Coursera to build my skills, learning alongside the people I serve.

Another part of Coursera that remains unchanged is, as if by some magnetic force, our ability to attract the most caring and talented teammates. Every day, I'm inspired by my colleagues' beautiful work, impressive smarts, heart, and genuine kindness toward each other. To any Courserian reading this: you're amazing, and I love working with you!

A crucial value that has remained a bedrock: the principle "Learners First." I remember, in my first couple of weeks at Coursera, I was in a meeting that turned into a heated debate over a product feature. Ideas and opinions flew: "What about this approach?" and "Hm, I like this other way." "Folks, let's ask," a manager paused the conversation, "which approach provides the best learner experience? Whichever approach is 'Learners First' is the answer." Since then, I've heard "Learners First" win the debate countless times, and it still always points us to the right answer.

I'm incredibly proud and grateful to work at Coursera, and I look forward to the years ahead as we continue to fulfill our mission to deliver the world's best education to everyone. Now, a look back at the early days of Coursera!

My first official photo at Coursera, taken in December 2013, with managers Lila Ibrahim and Yin Lu. They invited me to the holiday party before I'd even started; I remember how incredibly warm, kind, and welcoming they were. I knew Coursera would be the right fit!

Meetings at our first office often ended up in some corner, on the floor, because we only had a few conference rooms available in our small startup space.

Team photo from the last day at our first office. A bittersweet goodbye, but we were very much looking forward to amenities like plenty of conference rooms.

First day at the new office; construction was still in progress. The building was so big that I'd often get lost finding the meeting rooms.

Once we found the right conference room in our huge new office, the team (monkey included) would get to work brainstorming and prioritizing how we'd make education accessible to everyone in the world. If you missed a meeting (maybe because you got lost in the building), we'd follow up with the notes. While our "technology" was humble, we had big goals!

Working at Coursera was (and still is) so much fun and so energizing. This picture is in front of the "Learner Wall," a large photo collage of our learners. I can't recall the topic of this meeting … but I'm sure it was important, whatever it was!

This was one of my favorite team activities. We were each given a white canvas, and on each canvas there was a pencil outline of a portion of Coursera's original infinity-symbol logo. Our instructions were to paint the canvas, which is what we started doing. After a couple minutes, we were surprised with an instruction to shift to a neighbor's canvas! This process continued until we'd all worked on each other's pieces, at which point we put them all together to create a new and collaborative expression of our logo!

It was a big accomplishment when the marketing team grew to twelve people. Now, we're hundreds strong!

This was back when we could fit the whole company into one close-up photo. If we took a photo of the company today, the photographer would need to move back quite a bit to fit in over 1,000 people. Happy Birthday, Coursera!
How do I get Busybox Linux/ESXi 4U1 to recognize an expanded RAID 5 disk array?

I'm in the process of upgrading a pair of physical servers to 3 virtual machines. The 2 physical boxes (HP DL380 G5 with Smart Array P400 disk controller) will run VMware ESXi 4 update 1. Each physical machine has 8 72GB drives, originally configured as follows:

Disks 1 and 2, RAID 1 (mirror), operating system
Disks 3 - 8, RAID 5 (striped), data

After doing a physical-to-virtual conversion/migration of one of the physical machines to a temporary VM on another existing ESX host, I pulled the mirror set (OS) disks out to preserve the original OS until I was comfortable with the temporary VM. 6 disks remained and were reconfigured as a new RAID 5 array. I then rebuilt the box as a new ESX host and created virtual machines.

Later, I decided to add the 2 saved disks back into the system. After physically adding them, I used the Array Configuration Utility (ACU) to add the 2 disks to the array, then expanded and extended it. However, the ESX environment will not recognize the expanded disk. The datastore properties dialog shows the local physical disk capacity as 473 GB, but the datastore shows a capacity of 336 GB. When I click the "Increase..." button, there are no paths to go down since there are no new, unallocated LUNs from which to choose. I rebooted the system several times during these steps.

Having struggled with some other issues, I decided to start from scratch. Using the ACU, I deleted the logical volume and started the host installation again. When prompted for the volume to which ESX would be installed, it showed the single volume with the expected full disk size (~470 GB). I chose that and installed. After starting the ESX host, I looked at the datastore and saw that it is still only 336 GB. However, like before, the datastore properties show the physical file system having a capacity of 473 GB.
Furthermore, and most surprising, when I browse the datastore, all the files (virtual disks and such) that existed before I supposedly deleted and recreated the logical volume are still there, as if nothing had happened. I'm sure the datastore size is not limited by ESX, since I have another system running version 4 that has a logical volume of around 410 GB. Does anyone know what I'm doing wrong, or if I have to do something specific within the Linux environment to get it to recognize the larger volume?

Basically you shouldn't have used the two 'new' disks to expand the existing logical drive; VMFS3 doesn't usually work that way. Given you have a (pretty capable) P400 adapter, you should have just added the new disks to the existing array but then created a new logical disk from the unused space, still with R5 (I know that sounds mental to have what sounds like a 2-disk R5 logical disk, but it's aggregated across the array). This new logical drive could then be added to the datastore as a second extent. As it stands now you have a 470GB disk with a single 336GB VMFS3 partition; personally I'd be tempted to copy off the VMs temporarily and rebuild from scratch. I'm pretty sure you can't shrink back the logical drive; I wouldn't try it anyway.

edit - oh and I just spotted another of your posts on another question. Serverfault's not a forum, so don't 'answer' with a comment; use (when you can) the comment option. I've converted it to a comment, but normally they get deleted, ok :)

I would have, but commenting wasn't available on that post. Probably not enough rep; I just signed up. I suspected that was the answer I'd see after doing a bit more reading. However, it doesn't explain why the reinstall on the recreated volume didn't see the new disk size. I'm having problems copying the VMs off the machine, so I was just going to start over entirely. Not much lost work, fortunately.

By default the installer leaves VMFS partitions intact with no changes at all.

Ah, ok. So I should go into the ACU and blow away the whole array and just start clean. I finally got one of the VM files downloaded. Will try the others before I commit to toasting the whole thing. Thanks.

Yeah, basically. I like to boot my ESXi boxes from the SmartStart CD .ISO and use the proper full GUI ACU rather than the ORCA firmware interface - there are more options.

The final outcome of this was that I started over completely. I needed to move one of the VMs I created off to another host, so I just downloaded all the VM files. Then I was able to start from scratch. As @Chopper3 suggested, the installer left the VMFS partitions in place. First I tried just a simple reinstall, and all the data remained. Then I deleted the (now expanded) array and recreated it. This apparently didn't change anything, and the volumes and partitions still remained. Finally, I powered down, swapped all the disks around, and used the HP utility to "Erase Disks". Then I was able to create the new array and volume. The ESXi installer created the new partitions using all of the available space. Honestly, I've never found it so hard to destroy data. Usually my systems do that for me.
Molecule login crash on iTerm2 resize window

When using molecule login to ssh into a test instance, any action to resize the shell window in iTerm2 results in the error below and exits the test instance. Not a big deal, just a minor inconvenience. Please let me know if there is anything I can do to help.

vagrant@test-logstash1:~$
Traceback (most recent call last):
  File "/usr/local/bin/molecule", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/site-packages/molecule/cli.py", line 68, in main
    CLI().main()
  File "/usr/local/lib/python2.7/site-packages/molecule/cli.py", line 64, in main
    util.sysexit(c.execute()[0])
  File "/usr/local/lib/python2.7/site-packages/molecule/command/login.py", line 114, in execute
    self.molecule._pt.interact()
  File "/usr/local/lib/python2.7/site-packages/pexpect/pty_spawn.py", line 745, in interact
    self.__interact_copy(escape_character, input_filter, output_filter)
  File "/usr/local/lib/python2.7/site-packages/pexpect/pty_spawn.py", line 770, in __interact_copy
    r, w, e = select_ignore_interrupts([self.child_fd, self.STDIN_FILENO], [], [])
  File "/usr/local/lib/python2.7/site-packages/pexpect/utils.py", line 138, in select_ignore_interrupts
    return select.select(iwtd, owtd, ewtd, timeout)
  File "/usr/local/lib/python2.7/site-packages/molecule/command/login.py", line 124, in _sigwinch_passthrough
    self._pt.setwinsize(a[0], a[1])
AttributeError: 'Login' object has no attribute '_pt'

+1

I know this is an issue, @abrown-sg brought this up a while back. However, I cannot duplicate it. Can you tell me what to do? I have fired up an instance, logged in via molecule login, and am resizing my iTerm2 window.

I will make a video.
Sweet 👌

Hey retr0h, here it is: https://dl.dropboxusercontent.com/u/16675326/molecule_crash.mov If you need to know more about my shell: https://github.com/lxhunter/dotfiles otherwise shout ;) Thanks!

Hi @lxhunter, the video doesn't really show how you resized the window. Do you type a key combo? When I drag the window with my mouse, I cannot reproduce. Any additional info you have would be helpful.

Hey John, the "pill" on the lower right side lights up a little as I drag with the mouse. I can redo it tomorrow. I use iTerm 2, if that helps. Will look for further info. Cheers

Hi @retr0h - apologies, I just upgraded to 1.12.4 (from 1.11.4) and it's fixed.

@chrisb3ll @lxhunter Looks like this was fixed inadvertently during a coverage test PR. @lxhunter can you confirm against the 1.12 series?

Confirmed! Thanks!!!
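For reference, the traceback points at a SIGWINCH handler touching `self._pt` before it exists. A guard along these lines (a hypothetical sketch mirroring the names in the traceback, not the actual fix that shipped) would avoid the AttributeError when a resize arrives before the pexpect child is spawned:

```python
import fcntl
import struct
import sys
import termios


class Login:
    """Hypothetical sketch of a Login command: guard the SIGWINCH handler
    so a window resize that arrives before the pexpect child (self._pt)
    exists is ignored instead of raising AttributeError."""

    def __init__(self):
        # In the real command this is set once pexpect spawns the ssh child;
        # until then, a resize used to crash with the traceback above.
        self._pt = None

    def _sigwinch_passthrough(self, sig, frame):
        # Ask the controlling terminal for its current size.
        packed = struct.pack('HHHH', 0, 0, 0, 0)
        try:
            a = struct.unpack('HHHH', fcntl.ioctl(
                sys.stdout.fileno(), termios.TIOCGWINSZ, packed))
        except OSError:
            return  # not attached to a tty (e.g. output is piped)
        # The guard: only forward the new size if the child exists.
        pt = getattr(self, '_pt', None)
        if pt is not None:
            pt.setwinsize(a[0], a[1])

# Registration would look like:
# signal.signal(signal.SIGWINCH, login._sigwinch_passthrough)
```

`pexpect.spawn.setwinsize(rows, cols)` is the real pexpect call; everything else here (the class shape, the `None` default) is illustrative.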
A lot of APIs are documented using Swagger, and it's a good thing that APIs are documented, because it helps us developers understand how they work and how to call them. In this article, I will try my best to help you use this documentation to generate a client that will call those APIs.

In order to follow this tutorial, you will need a REST API, so you can have another API documented with OpenAPI 2 ready.

The easy part of this tutorial is to import the JSON documentation of the API you want the first one to call and put it in your resource folder. In my example I will use my clone API, and I have created a Jedi one, so in order for my clone to retrieve the Jedi, I will import the Jedi documentation file and then generate its client.

To generate the client, we will use the OpenAPI Generator Maven plugin. To do that, we need to update the build part of our pom.xml and add the plugin. So let's see what we have here:
- the goal generate indicates that the client will be generated during the generate-sources phase of our build, or it can be generated on its own using the Maven command mvn generate-sources
- the configuration holds some options used to customize the generated client, like:
  - the inputSpec, which tells the plugin where to find your imported file
  - the generatorName for our case
  - the sourceFolder, which is where my client will be generated

You can add some more options; I recommend you read the README.md of the project to get a better understanding of the possible options and choose those that apply to your case. Now that we have set up the plugin, let's run a clean install of our project. If all goes right, you should have in your target folder something that looks like this: as you can see, at the path described in the plugin configuration, I have my controller JediControllerApi, which uses ApiClient to call the Jedi API, and the model described in my documentation, Jedi, both generated.
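The build section described above can be sketched roughly as follows. This is an illustrative snippet following the openapi-generator-maven-plugin README, not the article's exact pom.xml: the spec path, package layout, and version number are assumptions (the version shown is just one published release).

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.openapitools</groupId>
      <artifactId>openapi-generator-maven-plugin</artifactId>
      <version>5.4.0</version>
      <executions>
        <execution>
          <goals>
            <!-- bound to the generate-sources phase of the build -->
            <goal>generate</goal>
          </goals>
          <configuration>
            <!-- where the plugin finds your imported documentation file -->
            <inputSpec>${project.basedir}/src/main/resources/jedi-api.json</inputSpec>
            <!-- which generator produces the client -->
            <generatorName>java</generatorName>
            <configOptions>
              <!-- where the client lands inside target/generated-sources -->
              <sourceFolder>src/gen/java/main</sourceFolder>
            </configOptions>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

After a clean install, the generated sources show up under target/generated-sources and are added to the compile path automatically.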
With that, I can directly use these objects in my project without having to create my own and then do some mapping. If you have carefully read step 1, in my documentation file the server URL was my localhost using port 8082, so my ApiClient will also have a field (basePath) with this value:

private String basePath = "http://localhost:8082";

By default the generated client will use OkHttp to make the API call, so you may have to add some more dependencies to your pom.xml in order to run your app. Now that we have gone all the way through the generation of the client, it would be best to use it, don't you think? Nothing really hard here, just declare your generated controller and call it to reach the second API:

JediControllerApi jediControllerApi = new JediControllerApi();
return jediControllerApi.getAllJedi();

Thanks for your reading time. As previously, the code used in this tutorial can be found in this GitHub repository, branch openApiCodeGen.
Returning Focus (Position) In Panel After PostBack?

Mar 8, 2010

VWD 2008 Express, Visual Basic. I have a gridview control within a panel control. The gridview can contain up to 128 rows. The panel is 300px high and has a vertical scroll bar that allows me to scroll to the row I want to see or edit in my gridview. When I click the "Edit" button on an item within my gridview, the page posts back and returns with the panel scrolled all the way back to the top. I then have to scroll back down to get to the item I want to edit in my gridview (which has correctly been placed in edit mode). How can I make the panel return to its position (or stay in its position) after a postback, without me having to manually scroll back down?

I have a panel inside an update panel. The panel has a scrollbar. When a control inside the panel is clicked, the scrollbar resets, scrolling the panel back to the top. Is there a simple way to preserve the scrolling position of a panel inside an update panel when a postback happens from inside that panel?

ScrollBar position resets on postback. Is there any way to maintain position? I am adding UserControl(s) dynamically to the page, therefore I want to maintain position (always maintain position at the end). Following is my .aspx file (just to show where the asp:panel and user controls are).

I have a gridview that is put in an ASP.NET Panel. Both the panel and the gridview are in an UpdatePanel. There is a column in the gridview that causes partial postbacks. I want to maintain the panel scroll position on those postbacks.

I have four controls in a page with an update panel. Initially, mouse focus is set to the first control. When I partially post back the page to the server, the focus automatically moves from the last focused control (the control I have tabbed down to) back to the first control. Is there any way to maintain the last focus?

I have a page to which I am adding user controls, at the bottom of the controls, each postback. The user control has a textbox in it, and the focus needs to be on this newly created control's textbox each time. It all works almost perfectly; however, when there are too many controls to fit on the page, because I set the focus to the textbox, the page scrolls to the textbox that has focus, not to the very bottom of the page. I have a submit button below this which ends up below the page limit. How can I set focus to a textbox but still scroll to the very bottom of the page to show the submit button?

I've got a problem with an Ajax form in MVC2 (VS 2010). I have an Index.aspx that has an Ajax.BeginForm with a textbox and an input button (Button 1). The HttpPost of this simple form is handled by an action of my controller. This action renders a PartialView. The PartialView has a table that I fill with a ViewModel. It also has another Ajax.BeginForm and another input button (Button 2). This new Ajax.BeginForm is handled by an action that has to do something with the data posted. Here's the thing: I click Button 1, fill the table and everything goes well, but after that, when I click anywhere on the page, Button 2 changes its position to the bottom of the page and gets the focus ... I don't know why ...

I am a designer prototyping applications. At the moment I want to show in my prototype that a drag panel opens and belongs to a tab. Two questions: How can I make a drag panel belong to a tab? And should that be too complicated code-wise (I am a designer, not a developer...): can I position the drag panel on opening for the first time? So far, my drag panel always opens in the top left corner of my browser. I tried positioning it with CSS, but it just ignores that.

I have a simple question. I need to keep my position after postback in a scrollbar, where a panel consists of 20 identical user controls. I have attached the code below for the simple form and for the user control. I have added MaintainScrollPositionOnPostback="true" in the webform, but the position still moves back to the top of the panel after postback.

I have some problems with maintaining scroll position after postback. The first time I experienced the problem was when I (believe I) added the Combobox control from the AJAX Control Toolkit and/or an UpdatePanel from AJAX Extensions. The problem is that when I do a postback on the page, the page loads at the top and not where I did the postback. Actually, this wouldn't be a problem if it weren't happening on a very large form. I have already tried using MaintainScrollPositionOnPostback="true", but it wasn't helpful at all. I can provide the code if needed, but I don't think it would be of any use, because I have comboboxes inside update panels which are rebound on a button click.

When an asynchronous postback happens inside an update panel, another postback also happens for the MasterPage, not only for the page embedding the update panel. I want to prevent this MasterPage postback. Is this possible? Think of it like this: I have a MasterPage and another page, test.aspx, which is a content page of the MasterPage. I have an update panel at test.aspx. When an asynchronous postback happens in this test.aspx update panel, it also runs the MasterPage Page_Load. I want to prevent this (it should not also run the MasterPage Page_Load).

Dim iCounter as Integer
Dim iQuantity as Integer = 10

Protected Sub btnFoo_Click Handles btnFoo
    Yadda
    For i as Integer = iCounter to iQuantity - 1
        'do something with AsyncPostBackTrigger until iCounter = iQuantity - 1
        'then trigger a full postback
    Next
End Sub

I am new to the concept and feel like there must be something really easy that I am missing.

I have a web page in which I have some validation after a textbox lostfocus, but I want that if an error occurs, the textbox changes its position, and as soon as the user corrects that error, the textbox is set back to its original position. How can I do that?

I have a web page that regularly refreshes on postback, and all of this works fine. However, a user has asked for an enhancement to the page, which I need help with: 1) There are 3 asp:Panels on the page, which are scrollable vertically. When the page refreshes, the scroll position returns to the top. The enhancement is to maintain the position of wherever the scrollbar is on postback. How do I keep the scrollbar position on postback?

I have a page that uses AJAX UpdatePanels. On this page, we have some radio buttons with AutoPostBack set to true. The problem was that after the postback, the control was losing focus, so that when the user would hit tab, control would be restored to the first control on the page and not the drop-down which fired the event. As a fix, I wrote some set-focus code in the radio button's OnCheckedChanged event. This seems to have fixed the problem with the focus. The problem I have now is that the browser loses its scroll position every time I click on one of these radio buttons. Is there a way to maintain scroll position? Maybe there is another way to resolve my original problem of setting focus that will prevent this from happening.
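A common workaround for the panel-scroll questions above, when MaintainScrollPositionOnPostback doesn't help (it only tracks the page's own scrollbar, not a panel's inner one), is to persist the panel's scrollTop in a hidden field from client script and restore it after the page re-renders. This is an illustrative sketch with hypothetical control IDs, not code from any of the posts:

```aspx
<%-- Hypothetical IDs: pnlGrid holds the GridView, hfScrollPos survives the postback. --%>
<asp:HiddenField ID="hfScrollPos" runat="server" />
<asp:Panel ID="pnlGrid" runat="server" Height="300px" ScrollBars="Vertical"
           onscroll="document.getElementById('hfScrollPos').value = this.scrollTop;">
    <%-- GridView goes here --%>
</asp:Panel>

<script type="text/javascript">
    function restoreScroll() {
        var pos = document.getElementById('hfScrollPos').value;
        if (pos) document.getElementById('pnlGrid').scrollTop = parseInt(pos, 10);
    }
    // Full postbacks:
    window.onload = restoreScroll;
    // UpdatePanel partial postbacks re-create the panel, so re-apply there too:
    if (typeof (Sys) !== 'undefined') {
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(restoreScroll);
    }
</script>
```

Inside a master page or naming container, the rendered IDs get prefixed, so use <%= pnlGrid.ClientID %> and <%= hfScrollPos.ClientID %> instead of the literal IDs.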
What determines if a contract is valid?
The basic elements required for an agreement to be a legally enforceable contract are: mutual assent, expressed by a valid offer and acceptance; adequate consideration; capacity; and legality.

Is a contract valid if it is illegal?
Technically, a contract or agreement that is deemed illegal will not be considered a contract at all, and thus a court will not enforce it. Instead, illegal contracts are said to be void or unenforceable, meaning it will be as if the contract never existed.

What makes a contract valid?
For a contract to be valid, it must have four key elements: agreement, capacity, consideration, and intention. Keep these elements in mind to ensure that your agreements are always protected.

When is a contract void in the US?
If those elements are not present, then the contract is void, even if both parties signed it. A contract is enforceable by law if it has these required elements: offer and acceptance.

Are there any contracts that do not need a signature?
The most obvious type of contract that does not require a signature to be valid is the oral contract discussed above. In that case, neither party signs the contract. In order to be valid, the oral contract must have the following basic requirements:

What makes a contract invalid under federal law?
When a contract is void, it is not valid. It can never be enforced under state or federal laws. A void contract is null from the moment it was created, and neither party is bound by its terms. Think of it as one that a court would never recognize or enforce because there are missing elements.

What makes a contract valid in the US?
To make a contract valid, any offer that's been made needs to be accepted by the other party. This tends to be a typical part of the contract process.

What makes a contract valid and what makes it invalid?
The two basic elements of a valid contract are "offer" and "acceptance". One party makes an offer (outlines what is provided), and the other party accepts the terms of the offer (usually in writing). Acceptance can take time, whereby the negotiation process continues until an agreement is reached.

What makes an oral contract a valid contract?
For an oral contract to be valid, it must contain these three elements: an offer, an acceptance of that offer, and consideration, in which each party receives something of value through an exchange that serves as the purpose of the contract.

Which is not a valid offer in contract law?
Another type of offer is one that is implied. When conveying the desire to make an offer through signs or actions, this may be taken as an implied offer. However, if one of the parties observes silence in the transaction, an implied offer isn't considered valid.
Android: How to avoid callback hell in retrofit2 enqueue function?

I am a beginner Android developer. I've run into similar callback hells before, but I'm confused because even Googling doesn't tell me exactly how to avoid callback hell in this case.

fun requestGetProduct(id: Long) {
    repository.getProduct(id).enqueue(
        object : Callback<ProductEntity> {
            override fun onResponse(
                call: Call<ProductEntity>,
                response: Response<ProductEntity>
            ) {
                if (response.code() == 200) {
                    _title.postValue(response.body()?.title)
                    _price.postValue(
                        "${DecimalFormat("###,###").format(response.body()?.price)} 円"
                    )
                    _description.postValue(response.body()?.description)
                    if (response.body()?.imagePath != null) {
                        repository.getProductImage(
                            id, response.body()!!.imagePath!!
                        ).enqueue(
                            object : Callback<ResponseBody> {
                                override fun onResponse(
                                    call: Call<ResponseBody>,
                                    response: Response<ResponseBody>
                                ) {
                                    if (response.code() == 200) _imageByteArray.postValue(
                                        response.body()?.bytes()
                                    )
                                }

                                override fun onFailure(call: Call<ResponseBody>, t: Throwable) {
                                    Log.e("サーバーエラー:", t.stackTraceToString())
                                }
                            }
                        )
                    }
                }
            }

            override fun onFailure(call: Call<ProductEntity>, t: Throwable) {
                Log.e("サーバーエラー:", t.stackTraceToString())
            }
        }
    )
}

In the case of Retrofit's callback function, I handle the result in the current scope with onResponse; how can I make getProductImage() know that the response from getProduct() was successful in onResponse? People often tell me to use RxJava or coroutines, and I want to use coroutines. It would be helpful if you could tell me exactly how to use a coroutine to avoid callback hell with Retrofit. I think it will be easier to understand by looking at the code than by hearing the explanation. That was a long explanation, but in conclusion: for readability, I want to use a coroutine so that the code reads sequentially, with getProductImage() executed once getProduct() has completed successfully.
Check out this link for the usage of RxJava and coroutines instead of the callback mechanism; it might be helpful: https://www.lukaslechner.com/comparing-kotlin-coroutines-with-callbacks-and-rxjava/

@MaulikTogadiya thank you. In order to implement this, should the return type of each function of the API service class be something other than Call, with 'suspend' added before 'fun OOO'?

Use suspend, but don't return Call. Just return the thing Call would have wrapped if you weren't using suspend. When you call the function, use try/catch. The catch block takes the place of onFailure. But you can wrap a series of calls in a try with only one catch.

@Tenfour04 Thank you! When I want to see the API's response code, can I change the return type SomethingResponseEntity to Response if I want to know the response code or error body too?

It's better to split your post into two separate posts: one for the 'response' question, and the other for the coroutines implementation.

@Mahmoud I think the comments on this post have given people like me the answer they want, so I think it will be helpful for people like me who didn't know about retrofit2 and async, even if I leave it as is :)
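To make the advice in the comments concrete, here is a sketch of the coroutine version. It assumes the Retrofit service interface has been rewritten with suspend functions (supported since Retrofit 2.6.0); the interface shape, the routes, and the repository methods are hypothetical, while the ViewModel fields mirror the question's code:

```kotlin
// Hypothetical service interface: suspend functions return the body
// directly (or throw on failure) instead of wrapping it in Call<T>.
interface ProductApi {
    @GET("products/{id}")
    suspend fun getProduct(@Path("id") id: Long): ProductEntity

    @GET("products/{id}/images/{path}")
    suspend fun getProductImage(@Path("id") id: Long, @Path("path") path: String): ResponseBody
}

fun requestGetProduct(id: Long) {
    viewModelScope.launch {
        try {
            // Sequential style: the next line runs only after getProduct succeeds.
            val product = repository.getProduct(id)
            _title.postValue(product.title)
            _price.postValue("${DecimalFormat("###,###").format(product.price)} 円")
            _description.postValue(product.description)

            product.imagePath?.let { path ->
                // The second call runs only when the first returned an imagePath.
                val image = repository.getProductImage(id, path)
                _imageByteArray.postValue(image.bytes())
            }
        } catch (t: Throwable) {
            // One catch block replaces both onFailure callbacks.
            Log.e("サーバーエラー:", t.stackTraceToString())
        }
    }
}
```

The suspend calls must run inside a coroutine scope; viewModelScope (from androidx.lifecycle) is the usual choice in a ViewModel and cancels the in-flight requests automatically when the ViewModel is cleared. If you also need the status code or error body, Retrofit additionally allows a suspend function to return Response&lt;T&gt; instead of the bare body.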
I have just read an interesting article titled Why Crunch Mode Doesn’t Work which documents the research on efficiency vs amount of time spent working (and by inference amount of time spent on leisure activities and sleep). It shows that a 40 hour working week was chosen by people who run factories (such as Henry Ford) not due to being nice for the workers but due to the costs of inefficient work practices and errors that damage products and equipment. Now these results can only be an indication of what works best by today’s standards. The military research is good but only military organisations get to control workers to that degree (few organisations try to control how much sleep their workers get or are even legally permitted to do so), companies can only give their employees appropriate amounts of spare time to get enough sleep and hope for the best. Much of the research dates from 80+ years ago. I suspect that modern living conditions where every house has electric lights and entertainment devices such as a TV to encourage staying awake longer during the night will change things, as would ubiquitous personal transport by car. It could be that for modern factory workers the optimum amount of work is not 40 hours a week, it could be as little as 30 or as much as 50 (at a guess). Also the type of work being done certainly changes things. The article notes that mental tasks are affected more than physical tasks by lack of sleep (in terms of the consequences of being over-tired), but no mention is made about whether the optimum working hours change. If the optimum amount of work in a factory is 40 hours per week might the optimum for a highly intellectual task such as computer programming be less, perhaps 35 or 30? The next factor is the issue of team-work. In an assembly-line it’s impossible to have one person finish work early while the rest keep working, so the limit will be based on the worker who can handle the least hours. 
Determining which individuals will work more slowly when they work longer hours is possible (although it would be illegal to refuse to hire such people in many jurisdictions), and determining which individuals might be more likely to cause industrial accidents may be impossible. So it seems to me that the potential for each employee to work their optimal hours is much greater in the computer industry than in most sectors. I have heard a single anecdote of an employee who determined that their best efficiency came from 5 hours of work a day and arranged with their manager to work 25 hours a week; apart from that, I have not heard any reports of anyone trying to tailor the working hours to the worker. Some obvious differences in capacity for working long hours without losing productivity seem related to age and general health, obligations outside work (e.g. looking after children or sick relatives), and enjoyment of work (the greater the amount of work time that can be regarded as "fun", the smaller the requirement for recreation time outside work). It seems likely to me that the parts of the computer industry that are closely related to free software development could sustain longer working hours, due to the overlap between recreation and paid work. If the amount of time spent working were to vary according to the capacity of each worker, then company structures for management and pay would need to change. Probably the first step towards this would be to try to pay employees according to the amount of work that they do; one problem with this is the fact that managers are traditionally considered to be superior to workers and therefore inherently worthy of more pay.
As long as the pay of engineers is restricted to less than the pay of middle-managers the range between the lowest and highest salaries among programmers is going to be a factor of at most five or six, while the productivity difference between the least and most skilled programmers will be a factor of 20 for some boring work and more than 10,000 for more challenging work (assuming that the junior programmer can even understand the task). I don’t expect that a skillful programmer will get a salary of $10,000,000 any time soon (even though it would be a bargain compared to the number of junior programmers needed to do the same work), but a salary in excess of $250,000 would be reasonable. If pay was based on the quality and quantity of work done (which as the article mentions is difficult to assess) then workers would have an incentive to do what is necessary to improve their work – and with some guidance from HR could adjust their working hours accordingly. Another factor that needs to be considered is that ideally the number of working hours would vary according to the life situation of the worker. Having a child probably decreases the work capacity for the next 8 years or so. These are just some ideas, please read the article for the background research. I’m going to bed now. ;)
import numpy as np
import pdb

# Complementary Code: J = 2, M = 2, N = 8
# c_p = 1          # pulse compression radar
c_p = 1 / np.sqrt(2)


class Node:
    def __init__(self, selfPlay, cummulativeMove, num_Moves, n_actions,
                 evaluating_fn, calc_reward, fromlink, parent=None):
        # A node has a state (of length num_Moves) and a parent.
        self.selfPlay = selfPlay
        self.move = cummulativeMove
        self.num_Moves = num_Moves
        self.n_actions = n_actions
        self.evaluating_fn = evaluating_fn
        self.calc_reward = calc_reward
        self.fromlink = fromlink
        self.parent = parent

        # A node has n_actions edges.
        self.N_sa = np.zeros(self.n_actions)  # N(s,a)
        self.W_sa = np.zeros(self.n_actions)  # W(s,a)
        self.Q_sa = np.zeros(self.n_actions)  # Q(s,a)

        # Evaluate this node using the DNN to get self.Prior_sa and value v.
        self.state = np.append(self.move, np.zeros(self.num_Moves - len(self.move)))
        if self.terminalNode():
            self.Prior_sa = np.zeros([1, n_actions]) - 2  # of no use
            self.value, _ = self.calc_reward(self.move.reshape([1, len(self.move)]))
        else:
            self.Prior_sa, self.value = self.evaluating_fn(
                self.state.reshape([1, len(self.state)]), self.selfPlay)
        self.reserve_Pr = self.Prior_sa

        # Add Dirichlet noise to the root node during self-play.
        if self.selfPlay == 1 and self.parent is None:
            self.root_noise()

        # A node has at most n_actions children; initialize to 0 children.
        self.children = []
        self.seenChildIndex = []

    def root_noise(self):
        self.Prior_sa = (0.75 * self.reserve_Pr
                         + 0.25 * np.random.dirichlet(alpha0 * np.ones(len(self.reserve_Pr[0]))))

    def terminalNode(self):
        # The height of the tree is at most num_Moves.
        return len(self.move) == self.num_Moves

    def add_child(self, cummulativeMove, fromlink):
        child = Node(self.selfPlay, cummulativeMove, self.num_Moves, self.n_actions,
                     self.evaluating_fn, self.calc_reward, fromlink, self)
        self.children.append(child)

    def __repr__(self):
        return ("I am a node\nmy move is %s\nI have %d children"
                % (self.move, len(self.children)))


def bestMove(node):
    # PUCT rule: Q(s,a) + c_p * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
    QplusU = node.Q_sa + c_p * np.sqrt(np.sum(node.N_sa)) * node.Prior_sa[0] / (1 + node.N_sa)
    bestvalue = np.max(QplusU)
    bestmoves = [index for index, value in enumerate(QplusU) if value == bestvalue]
    try:
        moveIndex = np.random.choice(bestmoves)
    except Exception:
        pdb.set_trace()
    return moveIndex


def cal_piVec(N_sa, tau=0):
    if tau == 1:
        probs = N_sa / np.sum(N_sa)
    elif tau == 0:
        probs = np.zeros(len(N_sa))
        probs[np.argmax(N_sa)] = 1
    else:
        probs = softmax(1.0 / tau * np.log(N_sa))
    return probs


def softmax(x):
    probs = np.exp(x - np.max(x))
    probs /= np.sum(probs)
    return probs


def Backup(node, value):
    # We update the edges rather than the nodes.
    while node.parent is not None:
        pos = node.fromlink
        node = node.parent
        node.N_sa[pos] += 1
        node.W_sa[pos] += value
        node.Q_sa[pos] = node.W_sa[pos] / node.N_sa[pos]
        # node.Q_sa[pos] = node.Q_sa[pos] + (value - node.Q_sa[pos]) / node.N_sa[pos]  # running average


def TreePolicy(node, VisitedState, stepSize, flag):
    # Keep searching along the existing tree until a terminal state.
    # Two cases when returning:
    #   1. an unseen child is found -> return this new child;
    #   2. the bottom of the tree is reached -> return the arriving node.
    while not node.terminalNode():
        # Calculate Q+U to choose an edge; this edge leads to a child.
        nextmoveIndex = bestMove(node)  # move with maximum Q+U
        if nextmoveIndex in node.seenChildIndex:
            # nextState is already in the tree - descend and continue searching.
            node = node.children[node.seenChildIndex.index(nextmoveIndex)]
        else:
            # Unseen state - first add a child, then return this new leaf node.
            node.seenChildIndex.append(nextmoveIndex)
            realNextMoves = np.array([(nextmoveIndex >> k) & 1 for k in range(0, stepSize)])[::-1]
            cummulativeMove = np.append(node.move, 2 * realNextMoves - 1)
            node.add_child(cummulativeMove, fromlink=nextmoveIndex)
            if flag == 1:
                VisitedState.store(cummulativeMove)
            return node.children[-1]  # return the newly-added leaf node
    return node


def MCTS_main(args, VisitedState, stepSize, n_steps, evaluating_fn, calc_reward, selfPlay):
    global alpha0
    alpha0 = args.alpha
    flag = selfPlay * args.recordState

    # Initial state.
    cummulativeMove = np.array([])
    temp_store = []

    # Run one episode.
    eachstep = 0
    while eachstep < n_steps:
        # Temperature parameter in MCTS.
        if eachstep <= int(n_steps / 3.):
            tau = 1
        else:
            tau = 0
        if eachstep == 0:
            root = Node(selfPlay, cummulativeMove, args.N * args.K, args.Q ** stepSize,
                        evaluating_fn, calc_reward, None, parent=None)
        for _ in range(args.simBudget):
            root.root_noise()
            v_l = TreePolicy(root, VisitedState, stepSize, flag)  # from v_0 to v_l
            Backup(v_l, v_l.value)  # back-propagation
        piVec = cal_piVec(root.N_sa, tau)

        # Temporarily store the state and search probabilities.
        currentState = np.append(cummulativeMove, np.zeros(args.N * args.K - len(cummulativeMove)))
        if selfPlay == 1:
            temp_store.append([currentState, piVec])

        # Update state -> go to the next time step.
        nextMove = np.random.choice(args.Q ** stepSize, 1, p=piVec)[0]
        root = root.children[root.seenChildIndex.index(nextMove)]
        root.parent = None
        cummulativeMove = root.move
        eachstep += 1
    return cummulativeMove, temp_store
It has been a year since close reasons were revamped across the Stack Exchange network, giving individual sites the ability to define a set of custom reasons explaining why a question is off-topic. At that time, SE Community Coordinator Shog9 "seeded" us with an initial set of custom reasons, and directed site moderators to develop and refine their custom close reasons in consultation with the community. This was never done. Despite an effort to get this conversation started at the time—an effort that, it should be noted, originated from the community and drew little attention from the moderator staff—we are still using the original custom Off Topic reasons Shog9 wrote for us a year ago. As Shog9 himself noted during the recent election campaign, we "could probably do a lot better." At the same time, in a meta post that currently has 16 net upvotes, Shog9 strongly urged us to define the limits of General Reference, including getting a lot more specific about what reference sources should be considered GR for which questions, and helping people understand how they can find answers for GR questions. There was considerable discussion in answers and comments to that question, but nothing ever came out of it. Shog9 has urged us to return to this question and actually come up with a resolution. What is the process for proposing a change to site policies and governance and seeing it through to resolution? We are very good at chewing issues to death on meta, and very bad at ever actually doing anything to change anything. I made a specific proposal in January to clarify and narrow how the GR close reason should be used. It sits at +13, with 19 upvotes and 6 downvotes. Despite attracting a reasonable amount of discussion, it has never officially been accepted or rejected, and as far as I can tell, no one from the moderation staff has ever weighed in on it at all. I'm not wedded to this proposal, and if I were to propose it today I'd probably make some changes to it. 
But to not even have it get rejected leaves me wondering where we go from here. If this is not how we're supposed to have these conversations, please tell us what we should be doing instead. What do we have to do to satisfy these requirements? Does the process need to originate with the moderation staff? If so, why didn't that ever happen? What can we do to help craft a policy that will work for this site? How is the sentiment of the community determined, and to what extent does that sentiment matter? Who makes the ultimate decision, and how will we find out what the decision is? Just tell us what we're supposed to do to help get the process started, and we'll do it. Doing nothing is no longer an option.
[BUG] Incorrect memory unit conversion in project resource quota

Rancher 2.6.1

To reproduce:
1. Create a project with the following resource quota assigned:
   - Memory limit: 512 MiB (namespace default: 256 MiB)
   - Memory reservation: 256 MiB (namespace default: 128 MiB)
2. Create a namespace in the project.
3. Inspect the resourcequota Kubernetes resource that is created in the namespace.

Expected Result: based on the configured namespace default limit, a memory limit of 256 megabytes and a memory reservation of 128 megabytes are configured.

Actual Result: a memory limit of 256 bytes and a memory reservation of 128 bytes are configured.

My checks: PASSED

Validation Environment

| Component | Version / Type |
| --- | --- |
| Rancher version | v2.6.3-rc2 |
| Rancher cluster type | Docker |
| Docker version | 20.10.7, build 20.10.7-0ubuntu5~20.04.2 |
| Helm version | v2.16.8-rancher1 |
| Downstream cluster type | RKE1 Linode |
| Downstream K8s version | v1.21.6 |
| Browser type | Google Chrome |
| Browser version | 96.0.4664.55 (Official Build) (x86_64) |

Validation steps

1. Starting from the Rancher homepage /dashboard/home, click the hamburger menu -> Cluster Management -> Explore next to an active cluster (the one described in the reproduction environment) -> Projects/Namespaces from the left side menu -> Create Project.
2. Enter a name for the project, for example hello-world-project -> click Resource Quotas -> click Add Resource twice.
3. Fill out the two resources:
   - Memory Limit – Project Limit: 512, Namespace Default Limit: 256
   - Memory Reservation – Project Limit: 256, Namespace Default Limit: 128
4. Click Create -> you are then redirected back to the Clusters page. See the screenshot labeled A1 for the network traffic of the POST request to v3/projects. :star: The values are sent as plain 512, 256 and 128 with no unit identifiers, meaning they will be interpreted as bytes.
5. You are redirected back to Projects/Namespaces. Under the newly created project hello-world-project, click Create Namespace -> enter the name hello-world-namespace -> click Create -> you are redirected back to Projects/Namespaces.
6. Click Kubectl Shell and run the following commands:

   kubectl get ns
   kubectl get quota -n hello-world-namespace

   Observe the output listed directly below:

   > kubectl get quota -n hello-world-namespace
   NAME            AGE   REQUEST                        LIMIT
   default-tz6k2   16m   requests.memory: 64Mi/128Mi   limits.memory: 64Mi/256Mi

7. Click Workload from the left side menu -> Deployments -> Create. Create a deployment USING THE NEWLY CREATED NAMESPACE:
   - Name: hello-world-deployment
   - Namespace: hello-world-namespace
   - Container Image: brudnak/hello-rancher-golang
   - :star: Resources – Memory Reservation: 64
   - :star: Resources – Memory Limit: 64
8. Click Create. You are then redirected back to Deployments; everything is successful now.
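The behaviour is consistent with how Kubernetes parses resource quantities: a bare number such as `256` is read as 256 bytes, while a `Mi` suffix is needed for mebibytes. A sketch of the quota the namespace default should produce (the metadata names here are illustrative, not taken from the report):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota            # illustrative name
  namespace: hello-world-namespace
spec:
  hard:
    requests.memory: 128Mi       # a bare "128" would be parsed as 128 bytes
    limits.memory: 256Mi         # a bare "256" would be parsed as 256 bytes
```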
As we have claimed above, we are professionals who will work on your behalf with precise methods for each problem. Here, you're not getting help from beginner writers. You pay someone to complete your assignment for you and, in return, receive a well-prepared assignment that is sufficient to demonstrate your leadership and innovative expertise to your teachers. Our custom essay writing service has no equal! We will help you produce something distinctive: a knowledgeable paper that will satisfy both you and your instructor. An assignment with a boring topic, or one that isn't interesting to write about, can be difficult for a student who doesn't feel enthusiastic about it. Therefore, to perform well according to the guidelines set by the university, all one needs to do is ask someone who is an expert to write their paper for them. If you pay someone to write your assignment, they should also offer a money-back guarantee in case they are unable to produce or deliver your work as per the guidelines. A large number of students all around the world tend to seek help with homework assignments that prove too challenging to complete on their own. Some of them seek out professional academic writing services online, but are unsure which ones they can trust. The answer to their problem is BuyAssignmentService. Pay someone to get help now. Use homework help online. The most dependable service to help with homework. Matlab project assignment assistance and homework help. Geometry homework help provided by our innovative online service. An essay or some other homework writing service for a great price. Learn more about our college homework help services and easy signup.
To understand and help out with math homework. Tutoring is a sacred job. It is the responsibility of every tutor to guide students with his or her knowledge and understanding. We at HandySolutions believe it is an honor to build the future of learners, and we are destined to help them. Quick Links: Are the presentation and readability of the assignment acceptable, and do they provide a plagiarism detection report for your work? Great job! I'm the type of person who needs inspiration to write a good paper, and if I'm in a bad mood or feel indifferent about the assignment, I can't do the work. Thank God I found you! You solved my problems. However, there comes a time when every student has to write and produce a paper based on their own subjects and topics, and this is where professional companies can help them out. We provide you with the following resolutions to the problems, by which professional services can ideally manage even the most challenging of topics with ease and offer you substantial assignment help. A large team of professional writers works under one roof, with years of experience and qualifications. Today, there are literally thousands of writing companies that claim to be authentic and credible for getting your homework done, but the majority of them are scams that want to steal your money. So come to us whenever you need help with your homework, because we have never let our customers down. "Writing is not my forte and it usually makes me feel stressed out. So naturally, when I was assigned my project, I was not at ease with it and I had to find someone to rescue me from this situation." Pay Me to Do Your Homework will no longer be in association with any ASU student.
Any ASU student that uses our service will be in violation of various educational institutions' student conduct policies or honor codes, which could result in student discipline, including possible expulsion, and (ii) our services are no longer available to any ASU students pursuant to an understanding reached with ASU.
The Game of Life is an old programmer's puzzle that teaches teenagers to write and analyse their own code. The goal of the game is to simulate the development (life and death) of cells within one organism. An organism is a matrix of arbitrary size, and the rules are as follows: if a cell is alive and surrounded by 2 or 3 living cells, it will survive; otherwise it will die. If an empty position is surrounded by exactly 3 living cells, a new cell will be born there. How do you create this simulation in Excel? In our example, the organism is a 20×20 cell matrix. The simulation is performed by two main subroutines and one auxiliary subroutine. The first resets the matrix, i.e. sets the array values to zero, and then determines by random choice which cells will be alive and which won't. The second subroutine checks the conditions and, depending on them, determines which cells will live and which will die. The last subroutine serves only to delete cells within the spreadsheet matrix. We begin the initialization by resetting the array. Although life takes place in a 20×20 matrix, our array will be 22×22. This matters when computing the sum of the cells surrounding those located at the edges and in the corners of the matrix. We reset the matrix by setting the array members to 0, and then use the Excel function RANDBETWEEN to decide at random which cells are alive. This happens within the 20×20 matrix. To complicate things a little, a Spin Button was added that changes the value of cell AF2 in the range 1 to 5. The higher this number, the fewer cells are generated initially. Following this procedure, the living cells in the matrix are given a value of 1 and the dead (empty) positions 0. The second subroutine is used to check and change the life status of individual cells. Using two nested loops that go from 1 to 20, each value in the matrix is checked and the sum of the cells surrounding it is calculated.
If there is no life in a position (value 0) and it is surrounded by three cells, a new cell will be born. If a living cell (value 1) is surrounded by 2 or 3 adjacent cells, it will survive; if by more or fewer, it will die. At the end, we use another double loop to show the living and dead cells in their positions. We start the simulation by clicking the RESET button (the first subroutine), and proceed from phase to phase by clicking NEXT (the second subroutine). With each subsequent click, you will watch the kaleidoscope of life change. Play around, and when you're bored, change the rules of the game and watch what happens. "The Game of Life" is a simulation I first created as a teenage programmer back in 1989; I was introduced to it by Vlada Kostic, a lecturer at the computer school run by the Institute of Nuclear Sciences „Vinča“, whose course I attended. Now I'm passing the knowledge forward; if anyone wants the Excel simulation file, they can contact me.
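The procedure described above (a 20×20 world padded to 22×22 so edge and corner cells have a full set of neighbours, plus two nested loops) can be sketched in Python; the spreadsheet itself uses VBA, so this is an illustration of the algorithm rather than the macro's actual code:

```python
import numpy as np

def life_step(grid):
    """One generation of the Game of Life on a 20x20 grid.

    As in the spreadsheet version, the 20x20 world is embedded in a
    22x22 array so that cells on the edges and in the corners have a
    full set of eight neighbours (the border stays permanently dead).
    """
    padded = np.zeros((22, 22), dtype=int)
    padded[1:21, 1:21] = grid
    new = np.zeros_like(grid)
    for i in range(1, 21):
        for j in range(1, 21):
            # Sum of the 3x3 block minus the cell itself = live neighbours.
            neighbours = padded[i-1:i+2, j-1:j+2].sum() - padded[i, j]
            if padded[i, j] == 1:
                new[i-1, j-1] = 1 if neighbours in (2, 3) else 0
            else:
                new[i-1, j-1] = 1 if neighbours == 3 else 0
    return new

# Random initial population, mirroring RANDBETWEEN:
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(20, 20))
grid = life_step(grid)
```

RESET corresponds to regenerating the random grid; each click of NEXT corresponds to one call to `life_step`.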
The Future for SharePoint Development

Office 365 has made SharePoint simpler to install, simpler to manage and simpler to use. Under the direction of Microsoft CEO Satya Nadella, the focus of new innovation has been on lightweight, easy-to-use apps. Planner, StaffHub, ToDo and Teams are all designed to be out-of-the-box, zero-training tools to help organisations become more efficient. This 'simple and lightweight' principle has also been applied to SharePoint.

Since its release in 2001, it's fair to say that SharePoint has been very 'Marmite'. Acknowledged as a powerful tool for organising information and managing business processes, it has never been loved by end users due to its clunky interface. Last year saw the release of SharePoint 2016, and with it, the 'Modern Experience'. This takes the core SharePoint functionality – pages, lists, libraries – and re-skins it to give a more intuitive and modern interface. Common tasks, such as copying documents, are easier to achieve, and the underlying system that drives this has been brought up to date to reflect current trends in software engineering. So far, so good…

The problem with the new SharePoint experience is that it was released before all of the features had been completed – a touch of cart before horse. Companies had begun to embrace SharePoint Online because it was readily available and becoming integrated with Outlook, OneDrive and Yammer. Suddenly there was confusion, with two interfaces and no clear way to customise and extend the new one. This has left people in limbo – do we keep developing on the Classic Experience using the older (but mature) technologies? Do we develop using current technologies on the new system? Or do we hold fire until a clearer picture emerges? Hopefully this article can provide some clarity as to what direction to follow.

The current landscape

As always, there are different solutions for different problems.
Previously, if you wanted to create some business processes on SharePoint, you would either:
- Use SharePoint Designer to create some workflows and forms;
- Use InfoPath to create more interactive form-based solutions, with workflows giving you business logic;
- Fire up Visual Studio and C# to write full-blown applications that run on your server.

PowerApps and Flow

These are the no-code, business-friendly, modern alternatives to Microsoft InfoPath and SharePoint Workflows. PowerApps is a form-builder tool designed for modern devices. Without having to write any code, you can quickly create a PowerApp that will interact with a SharePoint list (or other data). Once created, you customise the app using drag-and-drop controls, changing the order of fields and adding validation and rules. To do more complex tasks, you can create a 'Formula' or connect to server code by building a Custom Connector.

Flow is an engine that can perform tasks in the background as the user interacts with the PowerApp. It is designed to connect together any number of cloud systems – SharePoint, Dynamics, DropBox, MailChimp, Twilio, Twitter… These tasks can be strung together using a flowchart-based designer. This is where you can instruct the app, when a user adds a new item to a list, to email the user's manager, create a new lead in Dynamics, and add a record to MailChimp. Just like SharePoint 2013 Workflows, Flow allows you to create reasonably complex processes, including loops and 'if – then – else' statements. It can be attached to a list or library in SharePoint, or just run as part of the PowerApp. Flows and PowerApps can be saved as packages and version controlled.

SharePoint Add-Ins (aka SharePoint Apps)

When SharePoint moved online, Microsoft needed to find a way to allow people to run server-side code that wasn't going to bring down their hosting service. Previously, developers had been able to install code to run on the servers in a 'sandbox' – i.e. in its own process so that, theoretically, it wouldn't break anything else on the server. However, allowing people to run any sort of bespoke code on Microsoft's SharePoint servers was never going to be permitted for very long. SharePoint Apps, which were subsequently renamed SharePoint Add-Ins, were the solution to this. SharePoint Add-Ins are self-contained applications that can be hosted anywhere. They communicate with your SharePoint tenancy using one of a number of SharePoint APIs that Microsoft has made available. They can make use of SharePoint functionality, such as lists, libraries, pages, workflows and Web Parts. Because they are typically created using Visual Studio, they can be packaged up and redistributed to many SharePoint tenants/installs. They can also be placed on the SharePoint Store for people to purchase and download.

SharePoint Add-Ins are self-contained. To run them in your SharePoint site, you have three options:
- Run them as full-page applications – this means navigating out of your SharePoint site into a new site.
- Run them in your SharePoint page as an Add-In Part. This adds an iFrame to your page and runs the app in there.
- As a custom UI action – add a menu item to kick off a process in the background.

The SharePoint Framework – The Answer?

So far we have considered:
- PowerApps – responsive, integrated with the Modern Experience, but limited and potentially costly.
- SharePoint Add-Ins – powerful, server-side code, but need to run separately.
When your company decides to embrace the Business Rule Framework as the tool for rapid application development and modular design thinking, you can quickly begin to increase data quality in business processes. There are only a handful of vital documents in the system whose accuracy can prevent problems in end-to-end business processes, such as sales orders, production orders and purchase orders. When these documents are always accurate and complete upon saving, potential issues when processing subsequent documents (e.g. transportation, costing and billing) are prevented. For example, if incorrect purchase order data leads to errors in the financial posting of a goods receipt, then issue an error when the user tries to save the purchase order document. That forces the user to correct the data; otherwise the purchase order is saved as incomplete and no subsequent process steps are started. If you want BRF+ to make an immediate positive impact, implementing functions to sanity-check these essential documents is advised. Just create a function that returns messages. Depending on the type of message (e.g. abort, warning, error), the system will inform the user and suggest what needs to be done to resolve the issue. You can start with a limited number of validation rules and extend them over the following weeks and months. Developers do not need to be involved as long as the input for validation remains unchanged. Sanity checks can be performed at various stages:
- Adding, changing or deleting an item.
- Saving the document.

In some cases you may also wish to perform a sanity check at other stages, to verify whether relationships between items could impact data quality. For example, there might be business rules related to shipping point determination where the value could be influenced by data maintained for a previous item. Obviously you can perform that task upon saving, but then the user would not be able to see the changes made before pressing the save button.
In that case it is useful to be able to trigger the sanity check at any time during document processing, by adding an additional icon on the overview screen that runs the check against all the data currently available in the document. The beauty of BRF+ is the ability to re-use functions at various locations in the code. The trigger for sanity checking might sit in different user exits and enhancement points, but the result will always be the same. This gives you the assurance that there is always one version of the truth. The other benefit of BRF+ is the flexibility to arrange the sequence in which sanity checks are performed. New rules are also easy to add and maintain. Gone are the days when a minor functional update triggered a complex set of coding changes. As long as the input for sanity checking does not change, adding, changing or removing functionality will not require the involvement of the development team. This will dramatically decrease the lead time to deploy new rules defined by the business users.
Windows Batch file - Date parse inside IF does not work

I'm trying to parse a date inside an IF statement. My actual code is slightly more complex than the one quoted below (I'd use EnableDelayedExpansion there), but even so, the date parsing is behaving strangely. During my last attempt the expected output of the ECHO command would have been BK_20141111_1030.7z, but BK_20141111_10:30.7z is shown instead. If I move the line Set PARSEARG="eol=; tokens=1,2,3* delims=:, " outside the IF statement, the output is shown correctly. Is it possible to parse delimiters within an IF clause? DATE /T stores the date in the format GG/MM/YYYY (or MM/GG/YYYY); TIME /T stores the time in the format HH:mm.

@echo off
:cmpct
Set CURRDATE=%TEMP%\CURRDATE.TMP
Set CURRTIME=%TEMP%\CURRTIME.TMP
DATE /T > %CURRDATE%
TIME /T > %CURRTIME%
Set PARSEARG="eol=; tokens=1,2,3,4* delims=/, "
For /F %PARSEARG% %%i in (%CURRDATE%) Do SET YYYYMMDD=%%k%%j%%i
if 1==1 (
    Set PARSEARG="eol=; tokens=1,2,3* delims=:, "
    For /F %PARSEARG% %%i in (%CURRTIME%) Do Set HHMM=%%i%%j%%k
    echo BK_%YYYYMMDD%_%HHMM%.7z
)

What's the output from echo %time%? Also, you do not need the temp files: you can use the %TIME% and %DATE% environment variables directly.
Added the bracket; no modification in output. Adding echo %time% inside and outside the IF statement, the output is the following:

11:29:05,16
BK_20141111_11:29.7z
11:29:05,16

New code:

@echo off
:cmpct
Set CURRDATE=%TEMP%\CURRDATE.TMP
Set CURRTIME=%TEMP%\CURRTIME.TMP
DATE /T > %CURRDATE%
TIME /T > %CURRTIME%
Set PARSEARG="eol=; tokens=1,2,3,4* delims=/, "
For /F %PARSEARG% %%i in (%CURRDATE%) Do SET YYYYMMDD=%%k%%j%%i
echo %time%
if 1==1 (
    Set PARSEARG="eol=; tokens=1,2,3* delims=:, "
    For /F %PARSEARG% %%i in (%CURRTIME%) Do (
        Set HHMM=%%i%%j%%k
        echo BK_%YYYYMMDD%_%HHMM%.7z %time%
    )
)

Does this work?

@echo off
:cmpct
Set PARSEARG="eol=; tokens=1,2,3,4* delims=/, "
For /F %PARSEARG% %%i in ("%DATE%") Do SET YYYYMMDD=%%k%%j%%i
rem echo %time%
setlocal enableDelayedExpansion
if 1==1 (
    For /F "eol=; tokens=1,2,3* delims=:, " %%i in ("%time%") Do (
        Set HHMM=%%i%%j%%k
        echo BK_%YYYYMMDD%_!HHMM!.7z
        rem %time%
    )
)
if 2==2 (
    For /F "eol=; tokens=1,2,3* delims=:, " %%i in ("%time%") Do (
        Set HHMM=%%i%%j%%k
        echo BK_%YYYYMMDD%_!HHMM!.7z
        rem %time%
    )
)
endlocal

EDIT: I saw your memo and updated my answer. You cannot parametrize the FOR command with delayed expansion. The %variables% are expanded during the parsing of the script; the !variables! (delayed expansion) are expanded during the execution. But the FOR command checks its syntax during parsing, so using FOR !options! will always lead to a parsing error. So the only options you have are multiple IF conditions or a subroutine.

Sorry, but you worked around it by extracting Set PARSEARG="eol=; tokens=1,2,3* delims=:, " to outside the IF statement; I'd like to parse the text only in case a specific condition is verified. I'm speechless, you solved my problem even without my directly asking for a solution to it. Still, just for curiosity's sake, not using delayed expansion, can you explain why Set PARSEARG="eol=; tokens=1,2,3* delims=:, " works fine outside the IF statement but not from within?
The FOR statement still remains within the IF statement, so that should not be the real cause. @spike_ge - because of the brackets: every variable defined or changed inside brackets takes effect only when the brackets are closed - http://ss64.com/nt/delayedexpansion.html
STACK_EXCHANGE
MathPlayer Can Speak! MathPlayer 2.0 contains an early version of Design Science's math-to-speech technology and is intended only as a demonstration to stimulate interest in math accessibility. Design Science plans to improve the quality of what is spoken and realizes that much more work needs to be done before achieving a truly accessible interface. In December 2003, Design Science was awarded an NSF grant to research ways to improve the accessibility of mathematics. Two features that Design Science is investigating are (a) user navigation of expressions for better comprehension of complex mathematical expressions and (b) support for various mathematical braille formats for output to braille displays and embossers. Users with certain learning disabilities would benefit from synchronized speech and subexpression highlighting, and Design Science is investigating including this in MathPlayer. You can make MathPlayer speak the math embedded in a web page in two ways: - Right-click on an equation and choose the Speak Expression command. - Use a screen reader product that will read the entire web page and invoke MathPlayer to speak the math. Just as a test, right-click on the equation below and choose Speak Expression. In order for this demonstration to work, you must have a MathPlayer-compatible text-to-speech engine installed on your computer. If you don't have such an engine, MathPlayer will display a dialog directing you to this page. Please follow the instructions for installing a text-to-speech engine in the next section. Users with low vision may also benefit from MathPlayer's MathZoom feature; simply click on the expression and an enlarged version will appear -- click again to close it. If you are not running Windows XP, an additional download of a text-to-speech engine may be required. You can download a free text-to-speech engine from Microsoft: first download and install Microsoft Reader, and then download and install the text-to-speech engine mentioned on that page.
The installer will suggest that you "activate" Microsoft Reader -- this is not necessary for MathPlayer. You can change the voice that is used to speak the math, along with the rate and volume of the speech, using Windows' Speech Control Panel. Select the "Text to speech" tab to see your speech options. Many people prefer the female voice for speaking math. Higher-quality text-to-speech engines can be purchased from other vendors. For example, AT&T Natural Voices are compatible with MathPlayer's "Speak Expression" command and can be purchased at one of the sites listed on their Web page. Other speech engines should work if they support Microsoft's SAPI 5 interface. MathPlayer 2.0 implements Microsoft's Active Accessibility (MSAA) interface so that assistive software, such as screen readers, can seamlessly take advantage of MathPlayer's math-to-speech technology. Most screen readers make use of this standard interface. As of this writing, screen readers known to work with MathPlayer include Window-Eyes 4.21 and 4.5, HAL, Read & Write v6, and JAWS (v5.00.844). If math accessibility is important to you, contact your screen reader vendor so that they will consider supporting some of the planned accessibility enhancements to MathPlayer. Without your input, vendors may not make math accessibility a priority. It is very common for textbooks and technical papers to embed short snippets of math inline. Even short equations and inequalities are common. In fact, upwards of 90% of all math expressions in technical papers are short, inline expressions. Here are some larger examples of display math: sum and integral examples, an example of roots, an example of a table, an ambiguity test on which MathSpeak does poorly, a double angle formula, and finally, no math page would be complete without the solution of the quadratic equation. All of these examples were written in Microsoft Word and MathType and exported to MathML using MathType's "MathPage" technology.
MathPage technology was added to MathType in version 5.0. No special work is needed when authoring the expressions to make them accessible. Any product that exports MathML will produce pages that MathPlayer can speak. For a larger real-life example, see this page. Also, MSN Encarta uses MathML on many of their web pages that contain math, so much of their math should be accessible using MathPlayer.
OPCFW_CODE
pro-iv record locking question

Posted 06 February 2012 - 02:15 PM
I have a paging screen where a user can modify records in a table. I have another update function that reads through this same table in delete mode to try and process these records and delete them. Once I go into a record in the paging screen, if I try and run the update function, it will just hang until I exit out of the paging screen (without any type of message), then it will finish. I want to be able to ignore the record that is being locked and just process the remaining records in the update function. I put DSEL in the after-read error logic and record lock logic in the update function, but it doesn't seem to get to that code, since I also have messages to display and they do not get displayed. Even if I get out of the record in the paging screen and scroll down to other records, it seems to keep locks on one or more records. Any suggestions on how to handle this? Thanks.

Posted 06 February 2012 - 05:06 PM
Here is what I have:
Key field - list of files to be read: File: XXXX ACDL
Update program (loops through file XXXX): Primary File - XXXX, Mode: D, Record Lock Logic:
If I sit on a record in the paging screen, then start the update, the update shows UPDATE IN PROGRESS - PLEASE WAIT until I exit out of the paging screen.

In the Lock Logic (in the Update) you need to put the code: This tells the Function to stop waiting on the lock on this record.

Posted 06 February 2012 - 09:01 PM
Try setting up the update as:
Primary File XXXX L
02 XXXX D
(i.e. have the file in twice, with the Primary in L mode, then as the second file in D mode, with the SUPPRESS_RETRY Lock Logic on File 2).

Posted 09 February 2012 - 10:22 AM
What database are you using, ISAM or ORACLE?
You have described that there is:
- a screen function where a user can change records
- an update function where another user/session can delete the same records

The user in the screen is not going to have a good user experience if records they have selected for processing in a paging screen are deleted at the same time by another process/user. A possible solution (I have used this in the past) is to re-design the screen function:
a) in an update function, load the selected keys and maybe some data fields into a workfile; perhaps you could set a value on the data record to show that this record is on-hold/selected for processing (generally I'd record the OPR id and the process/function that has selected/held the data record)
b) use this file as the primary file in your paging screen, and the user changes values on this file
c) on exit from the paging screen, call an update to apply the changes captured. Make sure that you remove the value that identifies the record(s) being processed. If the update finds that the data record(s) have been deleted (no longer present), then you could present the keys of the missing records on an error report/screen to the user.

You could also re-design the update function:
a) read the table first in Lookup mode and check the value on the data record that identifies whether a record has been selected by another process - if already selected, then DSEL
b) then read the table a second time in Delete mode

Hope this helps
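The two-pass idea above (check a hold flag in lookup mode, skip held records with DSEL, then delete the rest) can be sketched outside PRO-IV; everything below is an illustrative model, not PRO-IV code:

```python
# Illustrative sketch of the re-designed update function: pass 1 reads each
# record in lookup mode and skips ("DSEL") any record another session has
# marked as held; pass 2 deletes only the rest.

def process_deletions(records):
    """records: dict key -> {"held_by": opr_id or None, ...}.
    Returns (deleted_keys, skipped_keys)."""
    deleted, skipped = [], []
    for key, rec in list(records.items()):
        if rec.get("held_by"):          # selected by another process: DSEL
            skipped.append(key)
            continue
        del records[key]                # safe to delete in pass 2
        deleted.append(key)
    return deleted, skipped
```

The point of the hold flag is that the skip decision is made on data, not on the lock state, so the update never blocks on a record a screen user is sitting on.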
OPCFW_CODE
For better performance, delete cookies and temp files from the computer, and uninstall all the unwanted software from the computer.

Run Disk Cleanup:
1. Open Disk Cleanup by clicking the Start button, clicking All Programs, clicking Accessories, clicking System Tools, and then clicking Disk Cleanup.
2. In the Disk Cleanup Options dialog box, choose whether you want to clean up your own files only or all of the files on the computer. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
3. If the Disk Cleanup: Drive Selection dialog box appears, select the hard disk drive that you want to clean up.
4. Click the Disk Cleanup tab, and then select the check boxes for the files you want to delete.
5. When you finish selecting the files you want to delete, click OK, and then click Delete files to confirm the operation. Disk Cleanup proceeds to remove all unnecessary files from your computer.

To run disk defragmentation:
1. Open the Start Menu.
2. Either type dfrgui in the Start Search area, or click All Programs, then System Tools, then Disk Defragmenter.

Increase the virtual memory of the computer. How to increase virtual memory in Windows XP:
1. Click Start, then click Control Panel, and then click System.
2. On the Advanced tab, under Performance, click Settings.
3. On the Advanced tab, under Virtual memory, click Change.
4. Under Drive [Volume Label], click the drive that contains the paging file that you want to change.
5. Under Paging file size for selected drive, click the Custom size box. You can enter the amount of memory you would like to reserve for virtual memory by entering the initial and maximum size.
6. If you are prompted to restart the computer, click Yes.

HOT TIP: To stop your CPU from constantly changing the paging file, set the initial and maximum size to the same value.
For example, 500 and 500. The value should be at least 1.5 times your physical RAM; if your computer has 512 MB of RAM, increase the virtual memory paging file to 768 MB (1.5 x 512 MB). In Windows Vista:
1. Click the Start button.
2. Click Control Panel.
3. Choose System and Maintenance, and then click System.
4. In the left pane, click Advanced system settings.
5. On the Advanced tab, under Performance, click Settings.
6. Click the Advanced tab, and then, under Virtual memory, click Change.
7. Click Custom size to change the Initial size (MB) and Maximum size (MB). See the hot tip above.

This will improve the performance of the computer.
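The sizing rule of thumb above (initial = maximum = 1.5 x physical RAM) is just arithmetic; a tiny helper makes the 512 MB example concrete:

```python
# The paging-file sizing rule above as a helper: initial and maximum size
# are set to the same value, 1.5x the physical RAM, so the page file never
# has to be resized while the machine runs.

def pagefile_size_mb(ram_mb, factor=1.5):
    """Return (initial_mb, maximum_mb) for the paging file."""
    size = int(ram_mb * factor)
    return size, size
```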
OPCFW_CODE
Linux - Set up a client-server network from scratch

I have been looking for this, but haven't been able to find anything "complete" out there, probably because it's such a standard setup that people think it's included in the human genome by now. I know these are a lot of things in one question, but they are all related. Here's what I would like to do:
- Have one (in a more involved scenario, more than one) Linux server, where information about users is stored.
- Have several clients (7-800, with a mix of Linux and Windows) from which users can work.
- Users have different roles (e.g. students, teachers and staff). Depending on the role, they have different permissions and different programs they need to use (students don't need to use the accounting software).
- Users can log in on any computer in the network and always see their desktop the way they left it when they last logged out. All the programs they need will be there, with the possibility for the administrator to add, update or remove programs for any user group.
- All users of a certain category (say teachers) should each have their own folder and a shared folder where they put things they are working on together.
- Ideally, users (at least some of them) should be able to log in from home over the internet and still see their desktop with their programs, just as if they were in the office.

Bonus features: when a new user needs to be created (new students come at the beginning of the new year), their information (Name, Address, Birth date, etc.)
can be imported from a CSV file, and the system will automatically give them an initial random password (which they will have to change on first access) and create a mail account for them (like <EMAIL_ADDRESS>). When the organization decides to change the 7-800 computers or to buy 2000 new ones, it should be possible to configure one of them and then "clone" the configuration onto all the others (if not all at one time, at least in batches of some tens of them). So the question is not necessarily how to do this; it would suffice to point to detailed information online. What I could find is not complete and very scattered around the net, so I can't put it back together to save my life.

It sounds like you are wanting a thin-client solution. That's what it was called in the old days, anyway. Nowadays this is VDI (Virtual Desktop Infrastructure). The client devices have no physical storage and very little memory and CPU; most of the work is done on a server somewhere out of the way in a closet. Several vendors play in this space: Microsoft, Citrix, HP, IBM and Dell. I think Oracle got out and all their VDI stuff is now EOL. You can roll this type of solution on your own; however, you will spend hours upon hours making it all work properly and then more hours teaching people to use it, to say nothing of the hours spent supporting it after the fact. If you want a stable solution that just works, look at the turnkey solutions from the vendors I mentioned previously. Do some homework; perhaps do a lab to demo the solution before picking a deployment. Just google "VDI for education". You can also look at "thin client for education" solutions offered by most of the big-name IT vendors. I'm sure they would bend over backwards to help you if you talk to them.
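The CSV-import bonus feature can be sketched as follows; the column names, login scheme, and account structure are assumptions for illustration only:

```python
# Illustrative sketch of the bonus feature: read new students from a CSV
# file and emit one account per row with a random initial password that
# must be changed on first login. Not tied to any particular directory
# service; the output could feed useradd, LDAP, or anything else.

import csv
import io
import secrets
import string

def make_password(length=10):
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def import_users(csv_text):
    accounts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # assumed login scheme: first initial + last name, lowercased
        login = (row["first"][0] + row["last"]).lower()
        accounts.append({
            "login": login,
            "name": f'{row["first"]} {row["last"]}',
            "password": make_password(),
            "must_change_password": True,  # force change on first access
        })
    return accounts
```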
STACK_EXCHANGE
/*
 * Math core zero header
 *
 * This file is part of the "ForkENGINE" (Copyright (c) 2014 by Lukas Hermanns)
 * See "LICENSE.txt" for license information.
 */

#ifndef __FORK_MATH_CORE_ZERO_H__
#define __FORK_MATH_CORE_ZERO_H__

#include "Math/Core/MathConstants.h"
#include <cmath>

namespace Fork
{
namespace Math
{

/* --- Template 'Zero' function with tolerance parameter --- */

/**
Approximated equality check with tolerance parameter.
For integer types it is not an approximation but a correct comparison (x == 0)
where the tolerance value is ignored.
*/
template <typename T, typename E = T> inline bool Zero(const T& x, const E& tolerance)
{
    return x == T(0);
}

//! Approximated equality check with default tolerance parameter.
template <typename T, typename E = T> inline bool Zero(const T& x)
{
    return Zero(x, E(0));
}

/* --- Specialized 'Zero' functions --- */

template <> inline bool Zero<float, float>(const float& x, const float& tolerance)
{
    return std::abs(x) < tolerance;
}

template <> inline bool Zero<double, double>(const double& x, const double& tolerance)
{
    return std::abs(x) < tolerance;
}

/* --- Specialized 'Zero' functions with default tolerance parameter (epsilon or epsilon64) --- */

template <> inline bool Zero<float, float>(const float& x)
{
    return Zero(x, epsilon);
}

template <> inline bool Zero<double, double>(const double& x)
{
    return Zero(x, epsilon64);
}

} // /namespace Math
} // /namespace Fork

#endif

// ========================
STACK_EDU
That preceding discussion brings me to an idea that I shared with a few of you, but I think I would like to get a response from a wider audience of this secret society. Both pros and cons are welcome. Like some of you, over the years I have had projects of variable quality delivered to me when I outsourced the development. It has been getting better only because I have been managing my projects more tightly. Being a developer myself, I see a need for a more standardized marketplace, project management tools and education. Elance, oDesk, rentA-whatever and the like only manage the hiring, review and payment process, but don't manage or guarantee the quality of the outcome.

The problems:
1) No guarantee of quality by the marketplace.
2) Either overpriced bids from large providers or very low bids from smaller providers that eventually end up delivering an inferior product. Both are in large part due to projects not being well defined by the project owners.
3) Project owners don't have a clue what their idea may cost to implement.
4) Millions of developers writing essentially the same code that other developers have already perfected.

A "Software Factory" where:
1) Anyone can become a provider or buyer.
2) A marketplace where, as a prerequisite to become a provider, one must agree to deliver to the marketplace's standards of quality or not get paid, and may even get a negative review or get penalized in other ways - all backed by an independent review process.
3) A requirement to pass training or a test in how to provide quality service.
4) Standardized software project management tools on a marketplace-wide basis.
5) A repository of project templates that describe functionality in high detail, show user interfaces, and list firm bids and time frames from providers. Anyone can reuse a template and change it for their own project to make their implementation of that project unique.
- This is done to show project owners what needs to be provided to developers for them to give a firm price and time frame on a project, and it gives developers exact requirements of what someone wants them to code.
6) A reusable code repository that could be used to assemble an application very quickly. Functionality that doesn't exist is well defined and sent to the marketplace for coders to bid on. After a new piece of functionality is coded, it becomes part of the repository and is used as a small block in the sponsoring project. Additionally, the coder of any block gets a bit of extra revenue each time someone reuses that code block.

Benefit to project owners: they get a 100% project completion guarantee (while industry-wide research from a reputable research firm reveals that over 65% of software projects are either over budget, over time or never completed). Their projects are completed with higher quality, on time, on budget and at a lower cost. The issue of who owns the source code I know how to solve. They don't have to worry about source code being stolen, about giving a lot of equity to a technical co-founder, or in some cases about needing a technical co-founder at all. Being a technical co-founder myself, I am not even a bit worried that I will lose any equity because of this system, because I get what I want, that works, fast. This removes the business risk of software development failure.

Benefit to developers: they get to produce high-quality code in projects that get finished. They get paid each time their code is reused in hundreds or even thousands of projects (like syndicated writers). They can work faster because a large part of the thinking about how to implement something has already been done.

So lay it on me, how is this a bad idea? Anyone want to do this?

1) I got no welcome email to the group, I just started receiving individual messages from Google Groups.
2) In the Community Guidelines it is clearly stated: "It is OK to be critical about an idea or opinion, but please be constructive." So, to be critical about an idea, there needs to be an idea proposed in the first place.
3) People without ideas are brain-dead. If I can't discuss ideas with anyone, then this whole thing is 50% or more useless.
4) If you have a problem with the amount of messages in your email box, get a better email box with filtering to put messages into folders, or get better group/forum software, and don't prohibit creativity.
5) And since we are talking about the tagline, your tagline is not accurate to the facts of life. Both ideas and people change.

--- On Wed, 12/5/12, Hayden Tay <hay...@founderdating.com> wrote:
From: Hayden Tay <hay...@founderdating.com>
Subject: Re: [FD Members] Software Factory - not SPAM, but a business idea
To: "Max" <alphaon...@yahoo.com>
Cc: "founderdating" <firstname.lastname@example.org>
Date: Wednesday, December 5, 2012, 9:49 PM

A note that we do not allow discussion of business ideas here. Sticking to our tagline - we believe strongly in people and less about the ideas (not to mention a flurry of emails discussing all the ideas we have will probably kill our inboxes). This rule is indicated in your welcome email to the group, but I missed putting it in the Community Guidelines. I apologize and will be updating the guidelines to reflect this. If you have further thoughts about improving the community or any suggestions for the Community Guidelines, please respond to me directly.
OPCFW_CODE
<?php
require_once("../settings.php");
require_once("../db_conn.php");

// create a table v2_tabela_hits with columns:
//   data varchar(12)
//   hits int
function increaseHit($dbh)
{
    $data = gmdate("Ymd", time());
    // fetch today's counter (a single SELECT replaces the original
    // rowCount() check plus second SELECT)
    $stmt = $dbh->prepare("SELECT hits FROM v2_tabela_hits WHERE data = :data");
    $stmt->execute([":data" => $data]);
    $result = $stmt->fetch();
    if ($result === false) {
        // add today's date
        $sql = "INSERT INTO v2_tabela_hits (data, hits) VALUES (:data, 1)";
        $stmt = $dbh->prepare($sql);
        $stmt->execute([":data" => $data]);
    } else {
        // increase hit
        $newHits = ((int) $result['hits']) + 1;
        $sql = "UPDATE v2_tabela_hits SET hits = :newHits WHERE data = :data";
        $stmt = $dbh->prepare($sql);
        $stmt->execute([":newHits" => $newHits, ":data" => $data]);
    }
}

$page_title = "Tabela de Estoque";

// List directory sorted reverse by time so that we retrieve the newest file
$provided_senha = isset($_POST['senha']) ? $_POST['senha'] : '';
if ($provided_senha == $tabelas_senha) {
    $tabelas_folder = dirname(__FILE__) . '/tabelas/';
    $tabelas_folder_xls = dirname(__FILE__) . '/tabelas_xls/';
    $tabelas_folder_htm = dirname(__FILE__) . '/tabelas_htm/';
    $files = scandir($tabelas_folder, SCANDIR_SORT_DESCENDING);
    $files_xls = scandir($tabelas_folder_xls, SCANDIR_SORT_DESCENDING);
    $files_htm = scandir($tabelas_folder_htm, SCANDIR_SORT_DESCENDING);
    $newest = $files[0];
    $newest_xls = $files_xls[0];
    $newest_htm = $files_htm[0];
    // first assignment must be "=", not ".=", since $content is unset here
    $content = "<h3>Tabela de Estoque</h3><br><a href='/tabelas/$newest' data-ajax='false' target='_blank'>Clique aqui para baixar o PDF ($newest) <img src='img/pdf.png'></a>";
    $content .= "<br><br><br>";
    $content .= "<a href='/tabelas_xls/{$newest_xls}' data-ajax='false' target='_blank'>Clique aqui para baixar a planilha XLS ($newest_xls) <img src='img/xls.png'></a>";
    $content .= "<br><br><br>";
    $content .= "<a href='/tabelas_htm/{$newest_htm}' data-ajax='false' target='_blank'>Clique aqui para baixar a página HTML ($newest_htm) <img src='img/html.png'></a>";
    $content .= "<br><br><span style='font-size: 10px;'>Icones (c) 2009 Teambox Technologies, S.L. MIT Licensed</span>";
    // increase hit counter
    increaseHit($dbh);
} else {
    $content = '
        <br><br>
        Senha incorreta. <a href="tabela.php" data-ajax="false">Tente novamente</a>
    ';
}

require_once("template_mapa.php");
?>
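One note on the select-then-insert/update pattern in increaseHit(): two concurrent requests can race between the SELECT and the write, losing a hit or inserting twice. An atomic upsert avoids this; here is an illustrative sketch using Python and SQLite (not the site's actual stack), assuming a UNIQUE/PRIMARY KEY on the data column:

```python
# Illustrative alternative to the select-then-update hit counter: a single
# atomic upsert statement is safe under concurrent requests. SQLite is used
# here purely for a self-contained demo (ON CONFLICT requires SQLite >= 3.24).

import sqlite3

def increase_hit(conn, day):
    conn.execute(
        """INSERT INTO v2_tabela_hits (data, hits) VALUES (?, 1)
           ON CONFLICT(data) DO UPDATE SET hits = hits + 1""",
        (day,),
    )
    conn.commit()
```

MySQL offers the same shape via INSERT ... ON DUPLICATE KEY UPDATE, so the PHP version could be collapsed to one prepared statement.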
STACK_EDU
nixos/gnupg: use better trick to update the agent TTY

Description of changes

Long story short: the SSH agent protocol doesn't support telling which tty the request is coming from, so the pinentry curses prompt appears on the login tty, messes up the output, and may hang. The current trick to work around this is informing the gnupg agent every time you start a shell: this assumes you will run ssh in the latest tty; if you don't, the latest tty will be messed up this time. The ideal solution would be updating the tty exactly when (and where) you run ssh. This is actually possible using a catch-all Match block in ssh_config and using the exec feature that hooks a command to the current shell. Source for the new trick: https://unix.stackexchange.com/a/499133/110465

Things done
[x] Tested change (on NixOS 21.05)

22.11 Release Notes (or backporting 22.05 Release notes)
[ ] (Package updates) Added a release notes entry if the change is major or breaking
[ ] (Module updates) Added a release notes entry if the change is significant
[ ] (Module addition) Added a release notes entry if adding a new NixOS module
[ ] (Release notes changes) Ran nixos/doc/manual/md-to-db.sh to update generated release notes
[x] Fits CONTRIBUTING.md.

Any idea how to prevent the following when building on remote builders?
wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: failed to create temporary file '/root/.gnupg/.#lk0x0000000001848200.magnesium.1064540': No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: can't connect to the gpg-agent: No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: error sending standard options: No agent running wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: failed to create temporary file '/root/.gnupg/.#lk0x0000000000b95200.magnesium.1064542': No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: can't connect to the gpg-agent: No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: error sending standard options: No agent running wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: failed to create temporary file '/root/.gnupg/.#lk0x0000000001c3f200.magnesium.1064545': No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: can't connect to the gpg-agent: No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: error sending standard options: No agent running wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: failed to create temporary file '/root/.gnupg/.#lk0x000000000128c200.magnesium.1064547': No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: can't connect to the gpg-agent: No such file or directory wezterm-20220903-194523-3bb1ed61-vendor.tar.gz> gpg-connect-agent: error sending standard options: No agent running Are you doing remote builds on a system with programs.gpg.agent.enable = true? Yes. #189711
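For reference, the catch-all Match block mentioned in the description (per the linked unix.stackexchange answer) looks roughly like this in ssh_config; this is a sketch of the trick, not necessarily the exact config the module generates:

```
# Run before every ssh connection: tell the running gpg-agent which tty
# to use for pinentry, so the prompt appears where ssh was invoked
# instead of on the login tty.
Match host * exec "gpg-connect-agent UPDATESTARTUPTTY /bye"
```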
GITHUB_ARCHIVE
Novel - Dual Cultivation - Chapter 731: He'll Sully Her Pure Body!

"Bring Su Yang here!" Senior Zeng said to the guard after pondering for a minute. Sometime later, Su Yang arrived at Senior Zeng's room.

'So he doesn't recognize me, huh? Well, that's to be expected, as this is his first time seeing me with this appearance.' Su Yang didn't blame Senior Zeng for not knowing that he was the Alchemy Master, but that didn't mean he wouldn't have a little fun with this situation.

"Please, take a seat, Su Yang. I have already prepared some tea for you." Senior Zeng gestured to the chair with a friendly smile. A strange smile appeared on Su Yang's face as he sat down.

Su Yang sipped the hot tea a few times before speaking, "Thank you. As for why I am here today... you should know."

A few moments later, Senior Zeng sat down across from him and spoke, "So, what business does the Sect Master of the Profound Blossom Sect have with the Divine Nature Garden? Before you answer that, let me offer my congratulations to the Profound Blossom Sect for forming an alliance with the Xie Family, setting a precedent for the first time in history."

"Furthermore, where is Luo Yixiao? I'm pretty sure I requested her presence, too," Su Yang suddenly said.

Senior Zeng began sweating after hearing his words, and he said, "I apologize, but my disciple isn't in the sect at the moment."

"Oh? Where did she go?" Su Yang asked with a curious expression on his handsome face.

"She... uh... she left the sect not long ago to gather some plants and ingredients for the new pill that we're concocting," Senior Zeng said, clearly not a good liar, as his voice trembled.

Seeing this, Su Yang chuckled inwardly before speaking, "That's fine. I can wait."

"Are you sure? Even I don't know when she'll return. It may be weeks, even months, as that's how long medicine runs can take." Senior Zeng continued making things up, hoping it would be enough to fool Su Yang.

"Wow, it takes that long for a medicine run? The pill you're concocting must be an extremely powerful one." Su Yang decided to go with the flow and act as though he knew nothing about alchemy.

"That's right. Even the simplest pills can take weeks, but this one... this new pill we're concocting is extremely complex and profound, and the ingredients required to concoct it are extremely rare, hence why it is taking so much longer." Senior Zeng spoke with a smile on his face, thinking that he'd managed to fool Su Yang.

"It must be tough being an Alchemy Master. I cannot imagine doing nothing all day just to concoct a single pill. After all, in case you didn't know, I am a Dual Cultivator who needs to move around, especially in the bedroom," Su Yang spoke with a smile on his face.

"Well, it can't be helped if Luo Yixiao isn't here. I honestly wanted to meet her, especially after all the good things I have heard about her." Su Yang sighed out loud, sounding like an arrogant young master who had had his excitement crushed. "Then I shall see you next time."

Nevertheless, as he got halfway to the exit, the door opened, and a very pretty young girl walked into the room.
OPCFW_CODE
The fourth version of Symbiotic brings a brand new instrumentation component, which can instrument the analyzed program with code that checks various specification properties. As a consequence, Symbiotic 4 participates for the first time also in categories focused on memory safety. Further, we have ported both Symbiotic and Klee to llvm 3.8 and added new features to the slicer, which is now modular and easily extensible. The research was supported by the Czech Science Foundation, grant GA15-17564S.

1 Verification Approach and Software Architecture

Symbiotic implements the approach of combining instrumentation, slicing, and symbolic execution to detect errors in C programs. While all the previous releases [2, 5, 7] focus on checking reachability of an error location, Symbiotic 4 can check any property definable by a finite state machine. For example, the finite state machine of Fig. 1 describes the double-free error. Intuitively, for every allocated block of memory we create a copy of the state machine that tracks the block's current status. An error state is reached if the block is deallocated twice. Hence, the instrumentation reduces property checking to unreachability checking, as the program violates the property iff the error state is reachable. Creation and tracking of the state machine is performed by code instrumented into the original program. In fact, the brand new instrumentation implemented in Symbiotic works more generally. It takes a JSON file with instrumentation rules. Every rule specifies a function call to be inserted before (or after) each occurrence of a given sequence of instructions. Bodies of the called functions are then defined in a separate file written in C. Each instrumentation rule can be refined using the output of a specified static analysis. For example, code checking for NULL dereference does not have to be inserted at locations where a suitable static analysis guarantees that the corresponding pointer cannot be NULL.
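As a concrete illustration, the double-free machine of Fig. 1 can be encoded directly. The sketch below is ours, in Java for illustration only; the state names and transition function are invented, since Symbiotic actually encodes the machine via instrumented C function calls:

```java
// Hedged sketch of the double-free state machine of Fig. 1.
// State names and the transition function are illustrative inventions;
// Symbiotic itself realizes this via function calls instrumented into the C program.
enum BlockState { ALLOCATED, FREED, ERROR }

final class DoubleFreeMachine {
    // Transition taken each time a free() of the tracked block is observed.
    // Freeing an ALLOCATED block is fine; freeing it again reaches ERROR,
    // which corresponds to calling __VERIFIER_error() in the instrumented code.
    static BlockState onFree(BlockState s) {
        return s == BlockState.ALLOCATED ? BlockState.FREED : BlockState.ERROR;
    }
}
```

One copy of this machine is created per allocated block; the property is violated iff some copy reaches the error state, which the instrumentation turns into plain reachability of __VERIFIER_error.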
For SV-COMP 2017, we have prepared instrumentation rules for checking memory safety properties. For the overflow property, we let the clang sanitizer instrument the program. We do not support checking the termination property, as it cannot be simply translated to reachability analysis.

The workflow of Symbiotic 4 is illustrated by Fig. 2. As the first step, we check that the verified property is not termination. Then we translate the analyzed C program to llvm bitcode with clang. Next, we check that the bitcode contains no calls to pthread_create, as neither our slicer nor Klee can process concurrent programs. If the check is successful, we proceed to the instrumentation of the bitcode. The instrumentation step has two phases. In the first phase, we insert instructions that tell the symbolic executor to treat all memory as symbolic, which allows us to correctly handle uninitialized variables. In the second phase, we perform a static analysis of the bitcode and instrument it as described above. We currently use a points-to analysis when instrumenting for memory safety properties, to insert property-checking functions only at the locations where the analysis itself does not guarantee that the property holds. The inserted functions call __VERIFIER_error whenever the property is violated. Definitions of the inserted property-checking functions as well as definitions of the __VERIFIER_* functions are then linked to the bitcode. Parts of the produced code that have no effect on reaching __VERIFIER_error call sites are consequently removed by slicing. Moreover, code optimizations provided by llvm are used before and after slicing. Before the bitcode is symbolically executed by Klee, we check that it does not contain instructions related to floating point arithmetic that are not supported by Klee, e.g. __isnan or __inf. We use our fork of Klee that produces an error witness when a property violation is detected.
If Klee reports that __VERIFIER_error is unreachable, we return true and a trivial correctness witness, unless Klee warns about not exploring the whole state space. This can happen, for example, due to limited support of floating point instructions. In such cases, we return unknown.

The slicer has undergone significant changes. Points-to analyses and the reaching definitions analysis (needed to build dependency graphs for slicing) were redesigned into a more general modular framework: Symbiotic now supports more types of analyses that share a common interface and are therefore interchangeable. In particular, the current version of Symbiotic supports both flow-sensitive and flow-insensitive points-to analyses, and for both of these analyses, field-sensitive and field-insensitive variants are available. Further, the points-to analyses can now precisely handle a larger subset of llvm, including the memset and memcpy llvm intrinsic calls. We have also implemented additional optimizations based on information about strongly connected components of the program's control flow graph to speed up the analyses. Note that the redesigned analyses are not firmly integrated into the slicer and can therefore be reused by external tools. The last significant change in Symbiotic 4 is that all components have been ported to llvm 3.8, including the symbolic executor Klee. Finally, we got rid of separate Perl and bash scripts in favor of a concise modular implementation in Python.

2 Strengths and Weaknesses

The main strength of the approach is its universality and modularity. Thanks to the instrumentation, Symbiotic now supports almost all checked properties specified by SV-COMP.
Authors of other llvm-based verification tools can also benefit from the implemented instrumentation and slicer: the instrumentation can be used to add the ability to verify additional properties such as memory safety to tools that only support reachability, and the slicer can be used to remove irrelevant parts of the verified program. The main disadvantage of the current configuration is the high computational cost of symbolic execution for branching-intensive programs. However, thanks to the modular architecture, a suitable software verifier can in principle be used instead of Klee to alleviate this problem.

3 Tool Setup and Configuration

Installation: Unpack the archive. The only requirement is Python 2.7.
Participation Statement: Symbiotic 4 participates in all categories.
Execution: Run ./symbiotic OPTS <source>, where available OPTS include:
- -64, which sets the environment for 64-bit benchmarks,
- -prp=file, which sets the property specification file to use,
- -witness=file, which sets the output file for the witness,
- -help, which shows the full list of possible options.

4 Software Project and Contributors

Symbiotic 4 has been developed by M. Chalupa, M. Vitovská, and J. Slaby with support of M. Jonáš and under the supervision of J. Strejček. The tool and its components are available under the GNU GPLv2 and MIT licenses. The project is hosted by the Faculty of Informatics, Masaryk University. llvm, Klee, stp, and MiniSat are also available under open-source licenses. The project web page is: https://github.com/staticafi/symbiotic

References

[1] Cadar, C., Dunbar, D., Engler, D.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: OSDI, pp. 209–224. USENIX Association (2008)
[2] Chalupa, M., Jonáš, M., Slaby, J., Strejček, J., Vitovská, M.: Symbiotic 3: new slicer and error-witness generation. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 946–949. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49674-9_67
[3] Horwitz, S., Reps, T.W., Binkley, D.: Interprocedural slicing using dependence graphs. ACM Trans. Program. Lang. Syst. 12(1), 26–60 (1990)
[4] King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385–394 (1976)
[5] Slaby, J., Strejček, J.: Symbiotic 2: more precise slicing. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014. LNCS, vol. 8413, pp. 415–417. Springer, Heidelberg (2014). doi:10.1007/978-3-642-54862-8_34
[6] Slabý, J., Strejček, J., Trtík, M.: Checking properties described by state machines: on synergy of instrumentation, slicing, and symbolic execution. In: Stoelinga, M., Pinger, R. (eds.) FMICS 2012. LNCS, vol. 7437, pp. 207–221. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32469-7_14
[7] Slaby, J., Strejček, J., Trtík, M.: Symbiotic: synergy of instrumentation, slicing, and symbolic execution. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 630–632. Springer, Heidelberg (2013). doi:10.1007/978-3-642-36742-7_50

© 2017 Springer-Verlag GmbH Germany

Cite this paper: Chalupa, M., Vitovská, M., Jonáš, M., Slaby, J., Strejček, J. (2017). Symbiotic 4: Beyond Reachability. In: Legay, A., Margaria, T. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2017. Lecture Notes in Computer Science, vol. 10206. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54580-5_28
OPCFW_CODE
Java: Contract between checking data and using data

I am developing the UI for a Java desktop application with Swing. There is a "main page" frame that displays many clickable icons for viewing panels. For any given panel, the frame needs a way to check if at least one of the data items the panel will use is set; if so, it enables the button to instantiate the class. If not, it disables the button. Pretty simple, but I have the following constraints that make it difficult: (1) the panel accepts an object in its constructor containing a lot of data, although it only uses some of it, and (2) I am not allowed to instantiate the panel until the user clicks a button to view it.

Here are three ideas; A and B violate the constraints, while C works but creates maintenance problems.

(A) What I wish I could do is have the panel only accept the data it is expected to use, and instead have the caller be in charge of determining whether that data is set; however, this violates constraint 1.

(B) Another option would be to build and cache the panel in advance, and record whether each data element it needs is set in a boolean. Once it's done being constructed, the frame can check that boolean. However, this violates constraint 2.

(C) The simplest solution I can think of is to equip the panel with a static method called "isActive" accepting the large data object, to check if the data the panel intends to use is set. If so, then the frame will enable the button, and the panel can be viewed. However, there is no contract between the data checked in the static method and the data actually accessed in the panel. This creates two problems: if the panel is constructed to display some data "foo", and "foo" is the only data set, but I forget to check "foo" in "isActive," then the panel will be unviewable.
If it is later decided that the panel doesn't need to display some data "bar," but "isActive" is still checking if "bar" is set, the panel will be viewable even when "bar" is the only data set, yielding an empty panel.

Is there a different solution I could use, or a way to modify (C) to make it easier to maintain?

It "sounds" like you need some kind of "selection model" which is notified when an icon is selected. Something like a JList might be useful in this situation. If you can't do that, every time a new component is added to your container, you'll need to register a MouseListener (presumably) with it and, when it is clicked, toggle its selected state via the "selection model". The "selection model" can then trigger events which can be used to enable/disable the button.

Couldn't you have the constructor also call isActive so they always check the same things?
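That last suggestion can be sketched as follows. This is a hedged illustration of option (C): the constructor re-runs the same static check, so a drift between what isActive() inspects and what the panel displays fails fast instead of silently producing an unviewable or empty panel. DataBundle, getFoo(), and FooPanel are invented names, not from the question.

```java
import javax.swing.JPanel;

// Invented stand-in for the large data object passed to every panel.
class DataBundle {
    private final String foo;
    DataBundle(String foo) { this.foo = foo; }
    String getFoo() { return foo; }
}

class FooPanel extends JPanel {
    // The frame calls this before enabling the button; it must check
    // exactly the fields the panel will actually display.
    static boolean isActive(DataBundle data) {
        return data.getFoo() != null;
    }

    FooPanel(DataBundle data) {
        // Contract enforcement: construction re-runs the same check, so
        // isActive() and the panel's real data needs cannot silently diverge.
        if (!isActive(data)) {
            throw new IllegalStateException("FooPanel built without displayable data");
        }
        // ... build components from data.getFoo() ...
    }
}
```

A forgotten check now surfaces as an exception during testing rather than a permanently disabled button, which addresses the first of the two maintenance problems directly.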
STACK_EXCHANGE
[Linux] use backing root in dehydrate functional tests

A number of tests in the DehydrateTests class fail on systems such as Linux where the virtualized repository's contents are not available at the mount location after the repository is unmounted. This PR adjusts these tests so they refer instead to the backing root location anytime file paths are referenced after the repository is unmounted. We also create a CheckDehydratedFolderAfterUnmount() helper method in DehydrateTests which encapsulates the different checks required on Windows and POSIX platforms.

This PR is probably most easily reviewed commit-by-commit (although the overall diff didn't turn out too bad either). There is a whitespace-only commit (ce7ec2e3f5) for which I apologize, but it was just going to be a real headache trying not to introduce additional missing CRs while refactoring, so I left that commit in to make the whole file consistent first.

Two particular commits I should note for extra scrutiny: first, in d19716138e, I pulled out the FolderDehydrateCreatedDirectoryParentFolderInModifiedPathsShouldOutputMessage() test, as it appeared to me that since 7956bd95b3 it was now identical to the earlier FolderDehydrateParentFolderInModifiedPathsShouldOutputMessage() test. And second, in a0264d6af5, I removed one call to GetVirtualPathTo() because, as far as I could tell, it was being called an extra time when constructing the path to pass the fileToWrite to AppendAllText() in the FolderDehydrateFolderThatIsSubstringOfExistingFolder() test.

The current code in master looks like this:

    string fileToWrite = this.Enlistment.GetVirtualPathTo(Path.Combine(folderToDehydrate, "App.config"));
    this.fileSystem.ReadAllText(fileToRead);
    this.fileSystem.AppendAllText(this.Enlistment.GetVirtualPathTo(fileToWrite), "Append content");

and, ignoring the middle line, I just didn't see how this could be what was intended.
/cc @kewillford

/azp run GitHub VFSForGit Mac Functional Tests
/azp run GitHub VFSForGit Mac Functional Tests
/azp run GitHub VFSForGit Mac Functional Tests
/azp run GitHub VFSForGit Mac Functional Tests
/azp run GitHub VFSForGit Mac Functional Tests

Hi @wilbaker -- I believe this PR is ready for review; however, assiduous clicking on "Re-run failed tests" buttons has failed to get the Mac CI tests past the build stage; they both now seem to fail when trying to sign the kext. Would you by any chance have a magic fix for that? Thanks very much and cheers!

/azp run PR - Mac - Build and Unit Test

@chrisd8088 the PR - Mac - Build and Unit Test test is no longer required, as there are some issues with the upgrade to Catalina (so don't worry about that failing). The GitHub VFSForGit Mac Functional Tests build still runs the unit tests there, so we still have coverage. I will take a look through the changes, thanks!

Awesome, thank you!

Hi again @wilbaker ... just wondering if you had any thoughts on this PR. I know there's a lot of renaming, but I hope it's all in a good cause (i.e., the final commit in the sequence, dfecd1f3aa8, is as clean as possible). I should note again what's in the description, though, as @kewillford may also want to take a look at these:

Two particular commits I should note for extra scrutiny: first, in d19716138e, I pulled out the FolderDehydrateCreatedDirectoryParentFolderInModifiedPathsShouldOutputMessage() test, as it appeared to me that since 7956bd95b3 it was now identical to the earlier FolderDehydrateParentFolderInModifiedPathsShouldOutputMessage() test. And second, in a0264d6af5, I removed one call to GetVirtualPathTo() because, as far as I could tell, it was being called an extra time when constructing the path to pass the fileToWrite to AppendAllText() in the FolderDehydrateFolderThatIsSubstringOfExistingFolder() test.
The current code in master looks like this:

    string fileToWrite = this.Enlistment.GetVirtualPathTo(Path.Combine(folderToDehydrate, "App.config"));
    this.fileSystem.ReadAllText(fileToRead);
    this.fileSystem.AppendAllText(this.Enlistment.GetVirtualPathTo(fileToWrite), "Append content");

and, ignoring the middle line, I just didn't see how this could be what was intended.

Hi again @wilbaker ... just wondering if you had any thoughts on this PR

@chrisd8088 apologies for the delay on this PR. It's on my radar and I plan to review the changes.

Awesome, thanks, @wilbaker. I know you're dealing with a bunch of sparse-related stuff right now too.

/azp run PR - Mac - Build and Unit Test

Hi @wilbaker (and @kewillford) -- I happened to notice that you might be starting on some new functional tests for dehydration in #1605, and I wondered if there was any chance I could persuade you to look over this PR before adding new dehydration tests, just because I'm selfishly hoping to avoid refactoring it again. :-) And I suppose I could argue that the effect of these changes is a net gain in readability ... I hoped for that, but may have introduced more verbosity than you desire. Let me know what you think, and thanks as always!

Thanks for all the comments! I'll try to work through these shortly -- much appreciated!

@wilbaker, @kewillford -- thanks for the close review and the suggestions! I think I've implemented them more or less as you intended, and it does indeed, I think, improve the readability a lot. Let me know what you think; this would clear up the last group of functional test failures on Linux not related to low-level implementation stuff which needs to be addressed in libprojfs. Thanks very much!

I'll merge in a moment! ❤️
GITHUB_ARCHIVE
Gutenberg generator, 2021: A page that consumes Project Gutenberg text files and creates an interactable 3D representation, with an appropriate book cover and size. The plausible page-turning and the opportunity for persistent visual landmarks help make it clear that while eBooks are of great benefit for the storage and sharing of documents, the human experience of making…

YouTube Wordle generator, 2021: A bookmarklet that creates a (not yet VR) word cloud of a YouTube video's transcript. This can be a useful artifact for understanding at a glance what a video is likely to contain, but it is also meant to demonstrate that a video often has informational content with as much validity to…

Leap Text Editor, 2019: A writing system powered by speech recognition and hand tracking for hand-pose detection. One issue with VR in a stationary computing context is the loss of the keyboard, for both typing and for hotkeys. By encoding functionality in specific hand poses and providing feedback about pose-distance, a system teaches…

Timeline VR, 2019: A timeline-centered view of Wikipedia, allowing users to place multiple timelines next to one another. Useful in a similar way to books like Timetables of History, but with the ability to compose wider sets of data and see longer patterns than can be shown on a single page.

Word Reality, 2017: A first attempt at a space for creating and assembling information. I wanted to explore what breaking the contents of a page apart might allow the different pieces to mean, and how we might relate to the collection of information that ultimately constitutes a document if we aren't always staring at a…

Worth a quick read, I think: https://appleinsider.com/articles/22/01/12/apple-is-late-to-ar-but-its-going-to-succeed-the-way-it-always-does

Game engines driving VR: https://www.forbes.com/sites/timbajarin/2022/02/01/what-will-be-a-foundational-technology-for-creating-metaverse-apps/?fbclid=IwAR0woZQ0gbR3e94MkvuuI5jTI39ziYuIvR1B34WxlmRr4WfBlrYvaBctlA0&sh=4940a13d3f78

I think this will have real repercussions for VR: https://youtu.be/V7qhOo_jR2Y

glTF (derivative short form of Graphics Language Transmission Format or GL Transmission Format) is a standard file format for three-dimensional scenes and models.

Neat exploration into redesigning eBooks. Addresses many of the things missing relative to reading a physical book. https://bureau.rocks/books/manifesto/ h/t Andy Matuschak
OPCFW_CODE
This article discusses the design, prototype development, and a simulation study of novel types of facade systems which integrate thermoelectric (TE) materials. TEs are smart materials that have the ability to produce a temperature gradient when electricity is applied, exploiting the Peltier effect, or to generate a voltage when exposed to a temperature gradient, utilizing the Seebeck effect. TEs can be used for heating, cooling, or power generation. In this research, the heating and cooling potentials of these novel systems were explored for commercial office space. Initially, two low-fidelity prototypes were designed, constructed, and experimentally tested to investigate heating and cooling potentials. Results, which have been previously published, indicated that these novel facade systems would operate well in heating and cooling modes under varying exterior environmental conditions. In this study, the research was extended to include simulations and modeling. A typical commercial office space was used in the simulation study to investigate the heating and cooling capabilities of thermoelectric facades. In the simulation model, a single office space was modeled with an exterior wall consisting of a thermoelectric facade, and interior walls as adiabatic partition walls. Computational Fluid Dynamics (CFD) simulations were conducted for different scenarios, using the SOLIDWORKS software program, varying the exterior environmental conditions (0°F, 30°F, 60°F, and 90°F) and the percentage of wall coverage with thermoelectric components (5%, 10%, 15%, and 20%). Simulations were conducted to calculate the temperature distribution within the interior space for these different scenarios, and to determine heating and cooling outputs. This paper reviews the results in detail.

High demand for energy used for lighting, heating, ventilation, and air conditioning leads to a significant amount of carbon dioxide emissions. According to the U.S. Department of Energy, 15% of global

Few applications of TEMs in facade assemblies have been researched, proposed, or constructed. This has created a significant gap in knowledge in the potential architectural applications of TEMs. Some researchers

A series of Computational Fluid Dynamics (CFD) simulations were used to investigate the research questions, under varying exterior environmental conditions (0°, 30°, 60°, and 90°F), similar to the previous experimental

Results for all simulated cases were collected, tabulated, and graphed for analysis. Temperatures were recorded for two different aspects: the surface temperature of the interior heat sink and the temperature distribution within

Results of the 20 simulated scenarios indicated that 15% TE coverage was the optimum percentage at which the highest performance was reached. Other scenarios were not as effective since they generated less

Simulation results indicated that TE materials are promising intelligent components that can be used in facade assemblies for heating and cooling purposes, controlling buildings' interior environments. This is an independent

Aksamija, A., Aksamija, Z., Counihan, C., Brown, D., and Upadhyaya, M. "Experimental study of operating conditions and integration of thermoelectric materials in facade systems." Frontiers in Energy Research, Special Issue on New Materials and Design of the Building Enclosure 7 (2019), Article 6, DOI: 10.3389/fenrg.2019.00006.
Aksamija, A., Aksamija, Z., Counihan, C., Brown, D., and Upadhyaya, M. "Thermoelectric materials in exterior walls: experimental study on using smart facades for heating and cooling in high-performance buildings." Proceedings of the Facade World Congress (2018): 171-180.
Bell, L. E. "Cooling, heating, generating power, and recovering waste heat with thermoelectric systems." Science 321 (2008): 1457-1461.
Department of Energy. "Building Energy Data Book 2011." https://openei.org/doe-opendat... (accessed April 30, 2019).
Liu, Z. B., Zhang, L., Gong, G., and Luo, Y. "Evaluation of a prototype active solar thermoelectric radiant wall system in winter conditions." Applied Thermal Engineering 89 (2015): 36-43.
Montecucco, A., Buckle, J. R., and Knox, A. R. "Solution to the 1-D unsteady heat conduction equation with internal Joule heat generation for thermoelectric devices." Applied Thermal Engineering 35 (2012): 177-184.
Seetawan, T., Singsoog, K., and Srichai, K. "Thermoelectric energy conversion of p-Ca3Co4O9/n-CaMnO3 module." Proceedings of the 6th International Conference on Applied Energy (2014): 2-5.
Snyder, G., and Toberer, E. "Complex thermoelectric materials." Nature Materials 7 (2008): 105-114.
Twaha, S., Zhu, J., Yan, Y., and Li, B. "A comprehensive review of thermoelectric technology: Materials, applications, modelling and performance improvement." Renewable and Sustainable Energy Reviews 65 (2016): 698-726.
Yilmazoglu, M. "Experimental and numerical investigation of a prototype thermoelectric heating and cooling unit." Energy and Buildings 113 (2016): 51-60.
Zhao, D., and Tan, G. "A review of thermoelectric cooling: Materials, modeling and applications." Applied Thermal Engineering 66 (2014): 15-24.
Zheng, X. F., Liu, C. X., Yan, Y. Y., and Wang, Q. "A review of thermoelectrics research - recent developments and potentials for sustainable and renewable energy applications." Renewable and Sustainable Energy Reviews 32 (2014): 486-503.
OPCFW_CODE
ReentrantReadWriteLock lock upgrade method

I have a question about lock upgrading. Specifically, what bothers me is what happens between readLock().unlock() and the following writeLock().lock()... I am providing a nearly complete implementation of a sample cache; I just omitted the actual loading of cached data from the database. I would appreciate it if you could review the code and share your thoughts. I tried to express my concern in the Java comments. My question is how to correctly synchronize in order to avoid rechecking the cache in the load method.

    import java.util.*;
    import java.util.concurrent.*;
    import java.util.concurrent.locks.*;

    public class JavaRanchSampleCache {

        private ConcurrentHashMap<String, ReentrantReadWriteLock> refreshLocks =
            new ConcurrentHashMap<String, ReentrantReadWriteLock>();
        private ConcurrentHashMap<String, HashMap<String, String>> cacheData =
            new ConcurrentHashMap<String, HashMap<String, String>>();

        public HashMap<String, String> getCachedData(String cacheKey) {
            ReentrantReadWriteLock lock = refreshLocks.get(cacheKey);
            if (lock == null) {
                lock = new ReentrantReadWriteLock();
                ReentrantReadWriteLock previous = refreshLocks.putIfAbsent(cacheKey, lock);
                if (previous != null) // null means no previous lock object, first-time usage
                    lock = previous;
            }
            // we have a safe lock at this point
            try {
                lock.readLock().lock(); // read the cached item for the corresponding cacheKey
                HashMap<String, String> cachedItem = cacheData.get(cacheKey);
                if (cachedItem == null) { // it is not cached yet; load to cache on first request
                    cachedItem = loadItemExpensive(cacheKey);
                }
                return cachedItem; // return the cached item
            } finally {
                lock.readLock().unlock();
            }
        }

        private HashMap<String, String> loadItemExpensive(String cacheKey) {
            ReentrantReadWriteLock lock = this.refreshLocks.get(cacheKey);
            HashMap<String, String> cachedItem = null;
            try {
                /* These two lines are for lock upgrading,
                 * BUT what happens if another thread interacts between line 1 and line 2?
                 * I mean, after the read lock is released, some OTHER thread might gain the
                 * write lock before this thread and perform the load-data and update-cache
                 * operation. So should I check the cache once more to see if some other
                 * thread has updated the cache? I 'feel' that I should, but it is ugly;
                 * I need some clever way. */
                lock.readLock().unlock(); // line 1
                lock.writeLock().lock();  // line 2
                /* omitted: load data from some expensive store such as a database or remote server */
                return cachedItem;
            } finally {
                lock.readLock().lock();    // the write-lock owner can acquire the read lock immediately
                lock.writeLock().unlock(); // lock released
            }
        }
    }

From the javadoc: "Reentrancy also allows downgrading from the write lock to a read lock, by acquiring the write lock, then the read lock and then releasing the write lock. However, upgrading from a read lock to the write lock is not possible." So you need to find a way...

Your fears are valid. Once you let go of the lock, all bets are off. When you get the write lock, you will need to check whether another thread has mutated the cache. Don't think of it as "ugly"; think of it as "correct". Continue to use comments to explain what you're doing and why you're doing it. In a nutshell, a read/write lock can't support both upgrading and downgrading. Multiple locks must always be acquired in the same order (in this case, write then read), otherwise a deadly embrace can occur. So when going the other way (read then write), you must let go of the lock, at which point any decisions you made based on current state must be reconsidered. But there are other options, depending on the details of your situation.
One is to always get a write lock in anticipation of needing to make a modification, and downgrade to a read lock if the data's already there. This is probably not what you want, especially if the majority of operations are read-only, as it will lead to lock contention and poor response times under load. Other options might involve allowing multiple threads to (redundantly) perform the expensive load, and acquiring the write lock just at the moment of insertion into the table. Or it might be appropriate to just blindly overwrite the existing entry (in this last case, all you would really need is ConcurrentHashMap). It all depends on the scenario and what you're willing to tolerate. Hope this helps.

This is an old question, but here's both a solution to the problem and some background information. If you need to safely acquire a write lock without first releasing a read lock, take a look at a different type of lock instead: a read-write-update lock. I've written a ReentrantReadWrite_Update_Lock, and released it as open source under an Apache 2.0 license here. I also posted details of the approach to the JSR166 concurrency-interest mailing list, and the approach survived some back-and-forth scrutiny by members on that list. The approach is pretty simple, and as I mentioned on concurrency-interest, the idea is not entirely new, as it was discussed on the Linux kernel mailing list at least as far back as the year 2000. The .NET platform's ReaderWriterLockSlim also supports lock upgrade. So effectively this concept had simply not been implemented on Java (AFAICT) until now.
The key feature is that the update lock can be upgraded from its read-only status, to a write lock, and this is not susceptible to deadlock because only one thread can hold an update lock and be in a position to upgrade at a time. This supports lock upgrade, and furthermore it is more efficient than a conventional readers-writer lock in applications with read-before-write access patterns, because it blocks reading threads for shorter periods of time. Example usage is provided on the site. The library has 100% test coverage and is in Maven central. Note there is a similar question here.
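The "release the read lock, take the write lock, then re-check" pattern that the accepted answer recommends can be sketched roughly as follows. This is a minimal illustration, not the poster's actual class: the field names, the `Cache` class, and the `expensiveLoad` stand-in are all hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the "re-check under the write lock" pattern discussed above.
// Class and member names are illustrative, not from the original post.
class Cache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    String get(String key) {
        lock.readLock().lock();
        try {
            String v = map.get(key);
            if (v != null) return v; // fast path: already cached
        } finally {
            lock.readLock().unlock();
        }
        // The read lock is gone here: another thread may load and insert first.
        lock.writeLock().lock();
        try {
            // Re-check under the write lock before doing the expensive load,
            // because state may have changed while no lock was held.
            String v = map.get(key);
            if (v == null) {
                v = expensiveLoad(key);
                map.put(key, v);
            }
            return v;
        } finally {
            lock.writeLock().unlock();
        }
    }

    private String expensiveLoad(String key) {
        return "value-for-" + key; // stand-in for a database/remote call
    }
}
```

The re-check is what makes the pattern correct rather than "ugly": at worst, a thread takes the write lock only to discover the work is already done.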
STACK_EXCHANGE
# %load ex3.py
import scipy as sc
import scipy.linalg as linalg
import math as m
from generate_data import read_data
import matplotlib
matplotlib.use('Agg')
from pylab import *
from matplotlib.pyplot import *
from matplotlib import rc
from matplotlib.pyplot import title as pytitle
from matplotlib.patches import Ellipse
import bovy_plot as plot

def ex3(exclude=sc.array([1,2,3,4]),plotfilename='ex3.png',
        bovyprintargs={}):
    """ex3: solve exercise 3
    Input:
       exclude      - ID numbers to exclude from the analysis
       plotfilename - filename for the output plot
    Output:
       plot
    History:
       2009-05-27 - Written - Bovy (NYU)
    """
    #Read the data
    data= read_data('data_yerr.dat')
    ndata= len(data)
    nsample= ndata- len(exclude)
    #Put the data in the appropriate arrays and matrices
    Y= sc.zeros(nsample)
    A= sc.ones((nsample,3))
    C= sc.zeros((nsample,nsample))
    yerr= sc.zeros(nsample)
    jj= 0
    for ii in range(ndata):
        if sc.any(exclude == data[ii][0]):
            pass
        else:
            Y[jj]= data[ii][1][1]
            A[jj,1]= data[ii][1][0]
            A[jj,2]= data[ii][1][0]**2.
            C[jj,jj]= data[ii][2]**2.
            yerr[jj]= data[ii][2]
            jj= jj+1
    #Now compute the best fit and the uncertainties
    bestfit= sc.dot(linalg.inv(C),Y.T)
    bestfit= sc.dot(A.T,bestfit)
    bestfitvar= sc.dot(linalg.inv(C),A)
    bestfitvar= sc.dot(A.T,bestfitvar)
    bestfitvar= linalg.inv(bestfitvar)
    bestfit= sc.dot(bestfitvar,bestfit)
    #Now plot the solution
    plot.bovy_print(**bovyprintargs)
    #plot bestfit
    xrange=[0,300]
    yrange=[0,700]
    nsamples= 1001
    xs= sc.linspace(xrange[0],xrange[1],nsamples)
    ys= sc.zeros(nsamples)
    for ii in range(nsamples):
        ys[ii]= bestfit[0]+bestfit[1]*xs[ii]+bestfit[2]*xs[ii]**2.
    plot.bovy_plot(xs,ys,'k-',xrange=xrange,yrange=yrange,
                   xlabel=r'$x$',ylabel=r'$y$',zorder=2)
    #Plot data
    errorbar(A[:,1],Y,yerr,marker='o',color='k',linestyle='None',zorder=1)
    #Put in a label with the best fit
    text(5,30,r'$y = ('+'%4.4f \pm %4.4f)\,x^2 + ( %4.2f \pm %4.2f )\,x+ ( %4.0f\pm %4.0f'
         % (bestfit[2], m.sqrt(bestfitvar[2,2]),
            bestfit[1], m.sqrt(bestfitvar[1,1]),
            bestfit[0], m.sqrt(bestfitvar[0,0]))+r')$')
    plot.bovy_end_print(plotfilename)
    return 0

# run exercise
ex3()
STACK_EDU
What is CAPTCHA CAPTCHA stands for the Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHAs are tools you can use to differentiate between real users and automated users, such as bots. CAPTCHAs provide challenges that are difficult for computers to perform but relatively easy for humans. For example, identifying stretched letters or numbers, or clicking in a specific area. What are CAPTCHAs Used for CAPTCHAs are used by any website that wishes to restrict usage by bots. Specific uses include: - Maintaining poll accuracy—CAPTCHAs can prevent poll skewing by ensuring that each vote is entered by a human. Although this does not limit the overall number of votes that can be made, it makes the time required for each vote longer, discouraging multiple votes. - Limiting registration for services—services can use CAPTCHAs to prevent bots from spamming registration systems to create fake accounts. Restricting account creation prevents waste of a service’s resources and reduces opportunities for fraud. - Preventing ticket inflation—ticketing systems can use CAPTCHA to limit scalpers from purchasing large numbers of tickets for resale. It can also be used to prevent false registrations to free events. - Preventing false comments—CAPTCHAs can prevent bots from spamming message boards, contact forms, or review sites. The extra step required by a CAPTCHA can also play a role in reducing online harassment through inconvenience. How Does CAPTCHA Work CAPTCHAs work by providing information to a user for interpretation. Traditional CAPTCHAs provided distorted or overlapping letters and numbers that a user then has to submit via a form field. The distortion of the letters made it difficult for bots to interpret the text and prevented access until the characters were verified. This CAPTCHA type relies on a human’s ability to generalize and recognize novel patterns based on variable past experience. 
In contrast, bots can often only follow set patterns or input randomized characters. This limitation makes it unlikely that bots will correctly guess the right combination. Since CAPTCHA was introduced, bots that use machine learning have been developed. These bots are better able to identify traditional CAPTCHAs with algorithms trained in pattern recognition. Due to this development, newer CAPTCHA methods are based on more complex tests. For example, reCAPTCHA requires clicking in a specific area and waiting until a timer runs out. Drawbacks of Using CAPTCHA The overwhelming benefit of CAPTCHA is that it is highly effective against all but the most sophisticated bad bots. However, CAPTCHA mechanisms can negatively affect the user experience on your website: - Disruptive and frustrating for users - May be difficult to understand or use for some audiences - Some CAPTCHA types do not support all browsers - Some CAPTCHA types are not accessible to users who view a website using screen readers or assistive devices CAPTCHA Types: Examples Modern CAPTCHAs fall into three main categories—text-based, image-based, and audio. Text-based CAPTCHAs are the original way in which humans were verified. These CAPTCHAs can use known words or phrases, or random combinations of digits and letters. Some text-based CAPTCHAs also include variations in capitalization. The CAPTCHA presents these characters in a way that is alienated and requires interpretation. Alienation can involve scaling, rotation, distorting characters. It can also involve overlapping characters with graphic elements such as color, background noise, lines, arcs, or dots. This alienation provides protection against bots with insufficient text recognition algorithms but can also be difficult for humans to interpret. Techniques for creating text-based CAPTCHAs include: - Gimpy—chooses an arbitrary number of words from an 850-word dictionary and provides those words in a distorted fashion. 
- EZ-Gimpy—is a variation of Gimpy that uses only one word. - Gimpy-r—selects random letters, then distorts and adds background noise to characters. - Simard’s HIP—selects random letters and numbers, then distorts characters with arcs and colors. Image-based CAPTCHAs were developed to replace text-based ones. These CAPTCHAs use recognizable graphical elements, such as photos of animals, shapes, or scenes. Typically, image-based CAPTCHAs require users to select images matching a theme or to identify images that don’t fit. You can see an example of this type of CAPTCHA below. Note that it defines the theme using an image instead of text. Image-based CAPTCHAs are typically easier for humans to interpret than text-based. However, these tools present distinct accessibility issues for visually impaired users. For bots, image-based CAPTCHAs are more difficult than text to interpret because these tools require both image recognition and semantic classification. Audio CAPTCHAs were developed as an alternative that grants accessibility to visually impaired users. These CAPTCHAs are often used in combination with text or image-based CAPTCHAs. Audio CAPTCHAs present an audio recording of a series of letters or numbers which a user then enters. These CAPTCHAs rely on bots not being able to distinguish relevant characters from background noise. Like text-based CAPTCHAs, these tools can be difficult for humans to interpret as well as for bots. Math or Word Problems Some CAPTCHA mechanisms ask users to solve a simple mathematical problem such as “3+4” or “18-3”. The assumption is that a bot will find it difficult to identify the question and devise a response. Another variant is a word problem, asking the user to type the missing word in a sentence, or complete a sequence of several related terms. These types of problems are accessible to vision impaired users, but at the same time they may be easier for bad bots to solve. 
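As a toy illustration of the math-problem variant described above, a server-side check can be as simple as generating two small numbers and comparing the submitted answer. This is a minimal sketch under my own assumptions, not any specific CAPTCHA library's API; the function names are invented for illustration.

```python
import random

def make_math_captcha():
    """Generate a simple arithmetic challenge ("3 + 4") and its expected answer."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    op = random.choice(['+', '-'])
    question = f"{a} {op} {b}"
    answer = a + b if op == '+' else a - b
    return question, answer

def check_answer(expected, submitted):
    """Return True if the user's submission matches the expected answer."""
    try:
        return int(submitted.strip()) == expected
    except ValueError:
        # Non-numeric input (e.g. from a naive bot) simply fails the check.
        return False
```

In a real deployment the expected answer would be stored server-side (e.g. in the session), never sent to the client, since anything visible in the page source is trivially readable by a bot.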
Social Media Sign In A popular alternative to CAPTCHA is requiring users to sign in using a social profile such as Facebook, Google or LinkedIn. The user’s details will be automatically filled in using single sign on (SSO) functionality provided by the social media website. This is still disruptive, but may actually be easier for the user to complete than other forms of CAPTCHA. An additional benefit is that it is a convenient registration mechanism. No CAPTCHA ReCAPTCHA This type of CAPTCHA, known for its use by Google, is much easier for users than most other types. It provides a checkbox saying “I am not a robot” which users need to select – and that’s all. It works by tracking user movements and identifying if the click and other user activity on the page resembles human activity or a bot. If the test fails, reCAPTCHA provides a traditional image selection CAPTCHA, but in most cases the checkbox test suffices to validate the user. Imperva Bot Detection: CAPTCHA as a Last Line of Defense Imperva provides the option to deploy CAPTCHAs, but uses it as the final line of defense, if all other bot identification mechanisms fail. This means it will be used for a very small percentage of user traffic. Imperva does provide the option to manually enforce CAPTCHA, for websites that need a stricter approach to advanced bot protection. In addition to providing bad bot mitigation, Imperva provides multi-layered protection to make sure websites and applications are available, easily accessible and safe. The Imperva application security solution includes: - DDoS Protection—maintain uptime in all situations. Prevent any type of DDoS attack, of any size, from preventing access to your website and network infrastructure. - CDN—enhance website performance and reduce bandwidth costs with a CDN designed for developers. Cache static resources at the edge while accelerating APIs and dynamic websites. - Cloud WAF—permit legitimate traffic and prevent bad traffic. 
Safeguard your applications at the edge with an enterprise‑class cloud WAF. - Gateway WAF—keep applications and APIs inside your network safe with Imperva Gateway WAF. - RASP—keep your applications safe from within against known and zero‑day attacks. Fast and accurate protection with no signature or learning mode.
OPCFW_CODE
Finalize Portfolio Really, this looks like more than it actually is :smile: The first commit just changes indentation and moves the duplicate code into capture blocks, so I actually didn't change anything, but GitHub recognizes it as new code :wink: You did a great job! :+1: There wasn't much to do. So, what did I actually change... As said, I've moved duplicate code into capture blocks. The button text now defaults to "Download" and I've removed both include.defaultThumbnail (unused) and item.description (you're already using item.content). The meta data is now aligned to the content (the only visible change). The original layout is now called jumbovision (like those gigantic televisions), the "columns" layout is called card (like business cards). You can change the layout on a per-item or per-portfolio basis (pico_edit currently uses the card layout; what do you think about that?). There's now a defaultImage option with which you can overwrite what image is being shown when the item doesn't specify any images (like most plugins). There's no fancybox popup for defaultImage on purpose. And the stylesheet is now responsive and works with IE 9+. IE 8 is broken for some reason - and I've absolutely no idea why... I'll do more research... Live Demo: http://phrozenbyte.github.io/Pico/plugins/ Backreference: picocms/Pico#358 jumbovision :laughing: :+1: So, as usual, I noted an odd, hard to reproduce bug. Probably so small it can be overlooked. I couldn't even get it to occur predictably. Actually, upon further investigation it wasn't your changes, since it happens in my version. If you accidentally double-click on a thumbnail, the content area stays open, but the page layout goes back to normal. Also, the content area's image will revert back to its full size while the content area is closing in this broken state. This part is likely due to a CSS class disappearing at an unplanned point. The image thing was actually what caused me to notice it though. 
This bug probably came from something in #4 / #5.* While it doesn't occur in the original implementation, I was able to make parts of their page disappear completely with some double-clicking hijinks. *(There was a small section of Javascript I commented out before though. I'll make sure it didn't have anything to do with this.) Ours only seems to be a visual bug. It fixes itself the next time you open or close the content area. Their original implementation broke so much that it required a page refresh to fix. :laughing: Anyway... just wanted to point that out. It's probably not worth fixing unless it's like a five second fix. Back on topic, I'll look through your changes soon, but what you've described sounds good. :+1: test test test test lol, you're never satisfied. :wink: test test test test lol, you're never satisfied. :wink: We're ready to roll... I did some kind of a "rage code refactoring" :laughing: With the previous mess of a code it was nearly impossible to track any bugs down (unused code, conflicting code, duplicate code...); this shouldn't be a problem anymore. Works fine with IE8+ and on mobile devices (which btw wasn't the case before, just try it yourself: open the Webpaint demo page with a mobile browser and scroll up so that the navigation bar appears - what a great UX :laughing:). This is still far away from being perfect; normally I wouldn't use jQuery for this (I have to due to Isotope), because jQuery fucks up the stylesheet pretty much and makes it impossible to actually recover the original/"clean" state (which is a prerequisite to e.g. support aborting the close animation). So if you want to open another item, the previous item is first fully closed (and user interaction is ignored in the meantime) before actually opening the new item. In "Opera Mobile", the mobile layout isn't triggering properly anymore. I've also tried the mobile versions of both Firefox and Chrome, and they work fine.
For whatever reason, the mobile layout is only partially triggering. The background colors on the header and footer are shrunken (and likely their entire elements), but the rest of the layout isn't rearranging. This could just be a bug in Opera. It's built on a slightly older version of Chromium. But still, it worked before, and still works correctly on my version of the site. shrug Does this happen on all pages or just on the plugins/themes page? It was happening on all the pages, and on Opera Mobile, not Mini. I have no idea what's causing it for me. I backed up and wiped my Opera browser's data and it works fine now (even once I restored my data). Must have been some weird cache issue or something. :sweat_smile: It's hard to figure that out on mobile. :unamused: It's hard to figure that out on mobile. :unamused: Depends. Chrome, Firefox and Opera all provide remote debugging techniques which we could use if there's an issue. Anyway, they seem to work (and it's quite unusual that mobile doesn't work when their desktop-counterparts work). The only browsers which don't work are Dolphin and UC Browser (with Dolphin you can open and close the detail view at least once, UC Browser doesn't work at all). However, neither Dolphin nor UC Browser provide any debugging technique, so it's impossible for us to figure out what's going on. And they don't have a reasonable market share anyway... So... Who cares :laughing: No, we only care about real browsers. :wink: .... which is why I'm always negligent of IE. :laughing: I know they have remote debugging options, but, for example, there's no simple way to hard reload a page. I probably should have tried clearing the cache, etc, before I commented. I just really didn't think I'd have anything cached for your github.io in that browser to worry about. Also, since the big-name mobile browsers typically run on the same engine as their desktop counterparts, I'd usually just use their desktop-based responsive design views. 
This issue was occurring only on that browser install though. :confounded: :open_mouth: That's an impressive amount of JavaScript refactoring. (Not that I've tried to understand any of it. :unamused: ) So I've finally finished reading through the changes (the regular ones). Looks a lot cleaner than before. :+1: Now to make that thumbnail image so they aren't all using the screenshot of the default theme. :laughing: Any last minute changes / decisions? Or should I go ahead and merge it? That's an impressive amount of JavaScript refactoring. Rage refactoring :laughing: Any last minute changes / decisions? Or should I go ahead and merge it? Nope, go ahead :smiley: :+1:
GITHUB_ARCHIVE
European clocks slowed by lag in continent's electricity grid: Countless Europeans who arrived late to work or school have a very good excuse: an unprecedented slowing of the frequency of the continent's electric power... Feel free to test, but in my/our experience, titles that balance keyword use and driving the searcher to want to engage have the best results. I agree with you, Martin. On the one hand it's good to have a domain name that includes / refers to your business name so that people would recognize it's you and would click when they see it, but on the other hand I've noticed an increase in generic-keyword domain names that "steal" our traffic and clicks just because they include these keywords in their domain name. If you have been employee of the month, salesperson of the year, or earned other recognition from your employer, a client or customer, or your profession or industry, make sure you include them. Dutch intel agency: Number, complexity of cyberattacks rises: The Netherlands' main intelligence agency says it is observing a rise in the volume and complexity of attempts at digital espionage, online... In the case of strategic link bait, evergreen content like how-tos and controversies generally helps, but this also depends on which industry you belong to. I'm sure the feedback for this WBF will be good and insightful. Missing from the list: keyword density, text-to-HTML ratio, and obsessing about pagespeed. Effective in select markets/verticals where you might not get caught (but risky): PBN. This has been really insightful. So with these eight, hopefully you'll switch from some old-school SEO techniques that don't work so well to a new way of thinking that will take your SEO results to a great place.
And with that, we'll see you all again next week for another edition of Whiteboard Friday. Take care. Google's homepage includes a button labeled "I'm Feeling Lucky". When a user clicks on the button, the user is taken directly to the first search result, bypassing the search engine results page. The thought is that if a user is "feeling lucky", the search engine will return the perfect match the first time without needing to page through the search results. These listings nonetheless are sometimes completely incorrect and there is no way to ask Google to correct them; for instance, on 25 July, for the El Capitan Theatre, Google showtimes lists Up, but according to the El Capitan website, the only movie playing that day is G-Force.[source?] Huh. We have noticed the opposite. Do you have any data at scale or studies that have shown that? All I have seen lately are the studies demonstrating how brand influences CTR, conversion, and retention positively. To produce this comprehensive list, Google first has to eliminate all the duplicate listings that businesses post to all of these job sites. Then, its machine-learning-trained algorithms sift through and categorize them. These job sites often already use at least some job-specific markup to help search engines know that something is a job posting (even though often, the kind of search engine optimization that worked when Google would only display ten blue links for this sort of query now clutters up the new interface with long, highly detailed job titles, for example). Who really knows what Google wants except Google. I agree with you on most of these points, but some of these techniques do still work.
You will need to implement them appropriately, and this relies on the idea that doing them otherwise wouldn't make anyone any money. Florida python found with deer inside it: Researchers in Florida say they have found an 11-foot-long invasive Burmese python that had eaten a deer that weighed more than the snake
OPCFW_CODE
I've searched for answers to this but have been unable to find my answer. I know that we need to update the registry so that it picks the right driver. However, when I run nicq.exe, I get the following output: Test 1: Check Driver Status Driver is properly installed. SPCD Driver service is not running. Attempting to start it... Unable to start driver. Test 1: FAILURE - Terminating Tests I verified the path in the registry is correct but it won't start it. Any thoughts? Brad - Could you please provide more details. Are you not able to monitor the agents who are logged in using Windows 7 64-bit, are you not able to monitor other agents when you are logged in to Windows 7 64-bit, or are the reports not getting updated when logged in to Windows 7 64-bit? Can you have a look in the location C:\Windows\SysWOW64\drivers\ for the file spcd.sys, and then copy that file to the location C:\Windows\System32\drivers. After doing the above, try nicq.exe. We are unable to silently monitor agents with the supervisor software on Windows 7 64-bit but can on all 32-bit Windows machines. We have verified everything but it seems to be the driver. We changed the registry to try to make it use the Calabrio packet capture driver, but when we run nicq.exe, we get the error that it won't start. I did try moving spcd.sys from C:\Windows\SysWOW64\drivers to C:\Windows\System32\drivers and changed the registry back, but it still won't open the driver when running nicq.exe. Is there another install package for CAD that could fix this? I believe you have already confirmed that the right NIC is chosen. I believe you have more than one NIC. If yes, it's required to choose the right NIC/IP address. Hi Brad, Were you able to find a solution to this issue? We are having the exact same problem and so far, no luck in fixing it. Thanks We did not find a solution to this. We are just waiting for our 3rd party voice recording solution to be installed. Let me know if you figure it out.
We have solved the monitoring problem in our environment. YMMV, no promises, and it might break something. But this seems to work for us. Our specs: CAD 8.0(2) Build 184.108.40.2060 running on Windows 7 Enterprise 64-bit. Get a copy of the UCCX upgrade to 8.5.1 ISO. We used UCSInstall_UCCX_8_5_1_UCOS_220.127.116.1104-25.sgn.iso, so that's all we can confirm as working. Go to \installer\Agent\ and extract Cisco Supervisor Agent.msi with an MsiX extraction method. Don't use an Administrative install switch with msiexec, as you will get the wrong files. We used Universal Extractor v1.6.1. In that extraction folder you should have a data1.cab; extract that. We used WinRAR. In that extracted folder, find spcd.sys7.E5574993_B957_4A06_BC9A_02E6FB3537CD. Rename it to spcd.sys. Copy it to C:\Windows\System32\drivers on the affected system. Verify that HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SPCD\ImagePath points to \??\C:\Windows\system32\drivers\spcd.sys, which was our default. In CUCM, disable the "Advertise G722 codec" field for the user. Reboot the affected system. When the agent is reopened, monitoring should be functional. Sorry for all the extraction, but I doubt Cisco would let us just start handing out the file. Thanks! I had the same problem with the same UCCX version, and I tried exactly what you did, Dan, and it works. I haven't tried UCCX 8.5 yet, but when I look at page 3 of that document it says 64-bit Windows is supported, and I cannot find the document for 8.0.2 (only 8.0.1, which never mentions 64-bit support), so I think 64-bit wasn't supported before 8.5. Maybe that's the reason the SPCD driver can't run on 64-bit with CAD 8.0.2.
OPCFW_CODE
Add host-wayland @dreveman, NOTE: I updated the chroot to version f7b5bad, which included the enter-chroot changes, and I added your chroot-bin/host-wayland and the changes in targets/core prior to my tests. I'm not sure /var/run/chrome/wayland-0 is accessible in the chroot - (trusty)denny@localhost:~$ ls -l /var/run/chrome/wayland-0 ls: cannot access /var/run/chrome/wayland-0: No such file or directory And I'm not sure /var/run/chrome is getting bind-mounted as /var/host/chrome either. I get the error displayed - (trusty)denny@localhost:~$ ls -l /var/host/chrome ls: cannot access /var/host/chrome: No such file or directory I tried changing the host-wayland script to check for /var/host/chrome/wayland-0 too but that didn't work. chronos@localhost ~/Downloads/crouton.unbundled $ enter-chroot -n trusty host-wayland Entering /var/crouton/chroots/trusty... echo 'No Chromium OS Wayland server is available.' 1>&2 Unmounting /var/crouton/chroots/trusty... Sending SIGTERM to processes under /var/crouton/chroots/trusty... The 'err' message gets displayed with the entire line too... I'm probably doing something wrong but I can't seem to get it to work. -DennisL @DennisLfromGA does /var/run/chrome/wayland-0 exist outside the chroot? Yes. Sorry, I meant to mention that also: chronos@localhost ~ $ ls -l /var/run/chrome/wayland-0 srw-rw---- 1 chronos chronos 0 Sep 1 20:49 /var/run/chrome/wayland-0 I'm currently on: Version 53.0.2785.81 beta (64-bit) Platform 8530.69.0 (Official Build) beta-channel lulu Firmware Google_Lulu.6301.136.57 Awaiting M53 stable - due out next week prolly... You're likely missing the necessary permissions on the socket. It should have been: srw-rw---- 1 chronos wayland 0 Sep 1 20:49 /var/run/chrome/wayland-0 and I'm not sure why the socket is not in the 'wayland' group in your case. Does it work if you use 'chronos' as username in the chroot?
The idea is to add all users who should have access to this socket to the wayland group, but I'm not sure how to best accomplish this in the crouton case. I guess it just happened to work for me as I've been using 'chronos' as the chroot username.. @dreveman, Here are the 'groups' for user 'chronos' on my system: chronos@localhost ~ $ groups audio video pkcs11 input brltty lpadmin devbroker-access cras chronos chronos-access Also, there is no 'wayland' group name in /etc/group. I have not tried the username 'chronos' in any of my chroots, but I can't see how it would matter in my case without a 'wayland' group name. This does not explain why /var/run/chrome is not getting bind-mounted as /var/host/chrome either, which is a requirement for this to work, I believe. -DennisL Are you using a Chromebook with Play Store support? If not, then that's likely what's causing your system to be different. No I'm not. Sorry, thought I'd mentioned that. Ok, that's a requirement for host-wayland to work today, but I'll look into making some ChromiumOS changes that would allow this to work on all Chromebooks. I thought Freon/Wayland was introduced around M50 and not dependent on Google Play??? You might have it confused with wayland client support. Wayland server support is new and it doesn't really depend on Play. It just happens that Play uses it, so that's why it works only on devices with Play enabled. There's no good reason for this limitation so I'll try to change this asap. Manually adding the 'wayland' group should work for now and give the socket the right permissions: https://cs.chromium.org/chromium/src/components/exo/wayland/server.cc?rcl=1472733616&l=3096 Okay, sorry to confuse and complicate things. I'll hold off testing / using host-wayland until my Dell CB13 gets Google Play. Well I guess my stumbling in the dark did produce some positive results after all. I can delete all of my comments above if you want to keep this PR pristine - just let me know.
@dreveman the chroot and the host are 1:1 for file gids, but do not necessarily have the same group associated with that gid. If you're lucky and the /etc/group inside and outside has the same association between group name and group id, then it'll appear the same inside and out. You can force the chroot to be lucky by adding to the list of groups to synchronize. Something to consider if we need to synchronize the wayland group. Oops, wrong reference :( Do we want croutoncycle to cycle through wayland windows like we did for host-x11? I would say "no" because the Chromium OS window manager actually handles looping through the wayland windows properly.
GITHUB_ARCHIVE
A ticket or a receipt? In the old days (when I used to travel) a ticket was a piece of paper. It had some substance. Now there is a multitude of names (and perhaps there was before as well although I feel more uncertain now). Emirates Airlines offer an “e-Ticket Receipt & Itinerary” A local carrier, Bangkok Air, gives me a “eTicket Receipt”, and Brussels airlines “e-ticket travel itinerary”. On Ryanair you get a booking reference only via email and later the (possibility to write a) boarding pass. On a new airline, there is always a moment of uncertainty for me. Did I only get a documentation of a financial transaction in connection with me buying a ticket, or was it the ticket itself? I realize that there are a multitude of other issues and reasons why you might not be allowed to board (passport or perhaps visa issues, health concerns etc.) so the ticket doesn't solve everything, but I still wonder if there is (or should be) an international (IATA?) standard for this document which essentially guarantees your trip and used to be a “ticket” and nothing else? I'm a little unclear what you're asking. Are you asking if there's a standard that airlines have to adhere to in terms of giving you a receipt/ticket for your flight? IATA finished rolling out a standard for e-tickets across all airlines in 2008, are you after more details on that, or on something else? The new equivalent for a "ticket" is an "e-ticket number". The issuing of that "e-ticket number" (sometimes simply called a ticket number given that basically all tickets are e-tickets now days) is the e-ticket equivalent of a physical piece of paper ticket being issued. When you make a new booking, most airlines assign a "Confirmation Number" or "Record Locator" to your booking, which is normally a 5-8 alphanumeric code, something like F3KA8R. These confirmation numbers do not imply you have a paid booking, but more that a reservation has been started. 
Confirmation numbers are not unique - not only can different airlines allocate the same confirmation number, but they also get re-used by airlines after a previous booking has been completed. At some stage after you purchase your itinerary, it is "ticketed", which is the point where payment is made, and the e-ticket number is allocated. E-ticket numbers are assigned in a format defined by IATA (the International Air Transport Association) that is a 13-14 digit long number. The first 3 digits of the e-ticket number map to the airline that issued the ticket. For example, the e-ticket number<PHONE_NUMBER>890 was issued by United Airlines, based on the "016" at the start. As far as what constitutes your "ticket", the e-ticket number is it. Normally something called a "receipt" or similar will include the e-ticket number on it, whilst an "itinerary" or similar may not - although it would normally include the confirmation number. When talking to an airline, they can normally use either the confirmation number or e-ticket number as a reference to find your booking. There are essentially no paper tickets any more. There are only e-tickets. You can't hold these in your hand. Probably the only part of them that matters is the ticket number. Airlines are free to issue you receipts, itineraries, confirmations or whatever else they want to call them, and will typically include the ticket number, your name, the airports involved as well as dates, times, and flight numbers, and finally their own "booking code" or whatnot that you can use on their website that is different from the ticket number. There's no single name for this mishmash of information, and nor is there a standard form, because none of it is for airlines. It is for the human passengers, their friends and colleagues with whom they share their plans, and perhaps customs officers who want proof of an onward or return journey. 
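The airline-prefix rule described above (the first three digits of the 13-14 digit e-ticket number identify the issuing airline) can be illustrated with a tiny lookup. This is a toy sketch: the prefix table is a hypothetical one-entry sample, not the full IATA assignment list; "016" for United Airlines is the example given in the answer above.

```python
# Toy illustration: split an e-ticket number into its airline prefix and serial.
# The prefix table below is a small hypothetical sample, not the full IATA list.
AIRLINE_PREFIXES = {
    "016": "United Airlines",  # example given in the answer above
}

def parse_eticket(number: str):
    """Return (airline, serial) for a 13-14 digit e-ticket number string."""
    digits = number.replace("-", "").replace(" ", "")
    if not digits.isdigit() or not 13 <= len(digits) <= 14:
        raise ValueError("e-ticket numbers are 13-14 digits")
    prefix, serial = digits[:3], digits[3:]
    return AIRLINE_PREFIXES.get(prefix, "unknown airline"), serial

print(parse_eticket("0161234567890"))  # ('United Airlines', '1234567890')
```

A real system would validate against the complete airline numeric-code registry; the point here is only the structure of the number itself.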
I'm sure there is a standard format and name for the actual e-ticket, agreed to by all the airlines. But you never see that, and never need to. As to the matter of "do I have what I need to get on the plane?", you can be quite confident if you've used their booking reference to look up your booking on their website, or if you've been able to check in online. You may find yourself handing a printed PDF to a frowning check-in agent, or passing your laptop across the desk (I've done this in Berlin and in Munich when Lufthansa people couldn't find my booking, but armed with assorted long numbers from my receipt, all was settled), but that's the exception, not the rule. The ticket isn't something you need to bring to the airport, or can lose. It's something you just need to prompt the airport people to find, which they do. In Europe you often need some printed thing to get through security. I don't think they're very particular regarding what's written on it as long as it has the right name and dates, but they might not accept just looking at a screen, and they definitely want something. IATA defines the standard for e-tickets, and for some more complicated things you will need to know your e-ticket number. @Gilles a boarding pass is needed to get through security everywhere. But they can issue those at the airport :-)
STACK_EXCHANGE
Hey there! Since the Security Research > Network section is pretty empty I'll start this first post about the Man-in-the-Middle network attack. The intention of this is to explain how the attack works, but not how to execute it (if you understand how it works, the execution is trivial; I'm strongly against script kiddies). I'm just writing as stuff pops up in my mind, so expect several edits over time... And please give me your feedback/comments/questions and I'll add it to the post! Ok... so let's go! How does a network work? Well, in order to understand ANY network attack it's important to understand how a network works. Long story short, devices communicate between them with "packets" of information. A packet (imagine a little box with information inside) travels through the network using cables or wireless communication, but in both cases they need a network device that interconnects everything and decides the destination/route of the packet (router, switch, access point, etc...). http://i.imgur.com/ZfOO8UL.png Common house/office network topology How does the communication between two computers work? So computer A wants to PING computer B... when you open a console (cmd.exe / Terminal), type ping computer_b and press enter... what's happening from that moment?? Ok, let me break it down for you. There are two concepts that you need to know beforehand: what a MAC is and what an IP address is. A MAC (Media Access Control) address is a 48-bit hexadecimal identifier of your physical network card (maybe you have seen them before, it's something like AA:BB:CC:DD:EE:FF). The IP (Internet Protocol) address is a 32-bit (in IPv4) identifier of your computer in a network. Why am I telling you this? Because there's something that's really important (and where the MITM attack resides) for network communications, and that's the ARP table. 
http://i.imgur.com/YeW8r4M.png ARP Table. The ARP table (as you can see above) is a translation table where you can identify which is the MAC address of a given IP (and vice versa). This table is "dynamically" generated during the time you are on the network: it starts out empty and it fills with ARP requests. Whenever you want to send a packet over the network you need to send that packet to the network router/switch, and that guy will route the packet through the network properly... So, computer A sends a packet that is labeled for computer C, the router/switch receives this packet, reads the label and sends the packet to computer C. Voila! Computer C got its packet... but what is THAT label? Well, that label is the MAC address of computer C. That's how the switch knows to which computer to route the packet. Ok, so who put that label on the packet? It was computer A... in simple words, the packet label says: send it to computer C (location: AA:BB:CC:DD:EE:FF). And where does computer A get the MAC address of computer C? From computer A's ARP table. But wait Swarmdeco, you told us before that the ARP table is empty when you connect to the network, so how does computer A know the IP address or the MAC address of computer C? Well, good question sir! Let me explain... Let's say that you ping 192.168.0.5 (IP of computer C) from 192.168.0.3 (IP of computer A) (That was the example, Swarm... you are complicating things! - I know, I know, please try to follow me, it's hard to write this, ok??)... so the first thing that computer A needs in order to label the packet is the MAC address of computer C (because you already have the IP), so computer A sends an ARP broadcast packet (FF:FF:FF:FF:FF:FF) to the switch (a broadcast packet is a packet that is sent to everyone on the network) asking: "Who is 192.168.0.5?" 
http://i.imgur.com/sEuTR96.png http://i.imgur.com/6D8mCiD.png Wireshark capture of an ARP broadcast packet and its response. Above you have an ARP broadcast packet asking who has 192.168.10.164 and to tell 192.168.10.161... So if SOMEONE on the network has that IP address, that computer sends back an ARP packet (like the second picture). Quick note: the MAC address and IP must match, here I just took a random pcap that I had... If someone can take better screenshots please send them to me and I'll update. Ok, so when the computer receives an ARP response it fills its ARP table... So let's continue with our example... you wanted to ping computer_b, so your computer needs to send a PING packet and label it using the information in the ARP table. The same thing happens on the other side of the communication: computer C needs to ask who computer A is and get its MAC address to fill its own ARP table and label its packets properly... By any chance, did you detect where the attack occurs?? Well... the ARP broadcast packet is for EVERYONE on the network and ANYONE can answer that call... you can even receive more than ONE answer. What the computer is going to do is update/register the LAST answer it receives... Developing the Attack As you can see, the receiver of the ping packet (or any local-network packet) is whoever is on the label of this box, and the label is determined by the ARP table of the computer that is sending the message. So let's do the following: let's poison the ARP table of computer A and the ARP table of computer C! How to do that? It's quite simple. 
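The "last answer wins" behaviour is the whole weakness, and it can be modelled in a few lines. This is a toy in-memory simulation of the cache update logic described above, with no real networking involved; the IPs and MACs match the example in this post.

```python
# Toy simulation of the behaviour described above: an ARP cache that
# simply records the LAST answer it receives for an IP, with no
# authentication. Pure Python, no real networking involved.

arp_table: dict[str, str] = {}  # IP address -> MAC address

def receive_arp_reply(ip: str, mac: str) -> None:
    """Any reply, solicited or not, overwrites the cached entry."""
    arp_table[ip] = mac

# Legitimate reply from computer C...
receive_arp_reply("192.168.0.5", "00:00:00:00:00:CC")
# ...followed by a forged reply from an attacker.
receive_arp_reply("192.168.0.5", "00:00:00:00:00:BB")

# From now on, packets "for C" are labeled with the attacker's MAC.
print(arp_table["192.168.0.5"])  # 00:00:00:00:00:BB
```

Notice that nothing in the protocol lets the victim tell the two replies apart; that's why the attack works.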
Let's imagine the following scenario: you are attacking from computer B with MAC address 00:00:00:00:00:BB and IP 192.168.0.3; computer A has 00:00:00:00:00:AA with 192.168.0.2; computer C has 00:00:00:00:00:CC with 192.168.0.4. Let's perform the attack by doing the following: Computer B sends an ARP response to computer A saying that the MAC address of 192.168.0.4 is 00:00:00:00:00:BB. Computer B sends an ARP response to computer C saying that the MAC address of 192.168.0.2 is 00:00:00:00:00:BB. Now every packet that computer A sends to computer C is going to be labeled with the MAC address of computer B, and vice versa. That means that we are going to receive EVERY packet of the communication between those computers on our machine and have access to all the information they are exchanging. In order not to interrupt the communication, we read each packet and forward it to its real receiver (that way we stay undetected). Now we are a "man in the middle" between these two network devices, because we are in the middle of the communication, eavesdropping on everything. With this kind of attack you can steal usernames/passwords from users, tamper with HTTP requests (also HTTPS, but it's way more difficult), redirect traffic, change the behaviour/look of webpages to troll friends, replace all the images in the victim's browser with cats, and even install malware/viruses on your friend's computer (don't do that... don't be the guy we hate). This attack is REALLY powerful, but most IDS/IPS will detect it if you don't do it properly, so try it at home! It's a LOT to swallow in the first read, and maybe I'm missing TONS of points and details... but anyway! I did my best... I'll improve the guide based on the questions that I receive and the feedback of the community. 
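On the defensive side: one heuristic that detection tools commonly use is that after this attack, a single MAC address (the attacker's) ends up claiming more than one IP in the victims' ARP tables. The sketch below only inspects a snapshot of an ARP table represented as a plain dictionary; the entries are illustrative, taken from the scenario above.

```python
# Defensive sketch: flag any MAC address that appears for more than one
# IP in an ARP-table snapshot - a classic sign of ARP spoofing, since
# the attacker's MAC gets mapped to both victims' IPs.

from collections import defaultdict

def find_suspect_macs(arp_table: dict[str, str]) -> dict[str, list[str]]:
    """Return MACs that appear for more than one IP address."""
    by_mac: defaultdict[str, list[str]] = defaultdict(list)
    for ip, mac in arp_table.items():
        by_mac[mac].append(ip)
    return {mac: ips for mac, ips in by_mac.items() if len(ips) > 1}

snapshot = {
    "192.168.0.2": "00:00:00:00:00:AA",
    "192.168.0.3": "00:00:00:00:00:BB",
    "192.168.0.4": "00:00:00:00:00:BB",  # C's IP now resolves to B!
}
print(find_suspect_macs(snapshot))
# {'00:00:00:00:00:BB': ['192.168.0.3', '192.168.0.4']}
```

Real IDS/IPS products watch for unsolicited or conflicting ARP replies on the wire rather than polling the table, but the underlying signal is the same.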
Mandatory disclaimer: Any actions and/or activities related to the material contained within my posts are solely your responsibility. The misuse of the information in my posts can result in criminal charges brought against the persons in question. The authors will not be held responsible in the event any criminal charges are brought against any individuals misusing the information in my posts to break the law. (I'm going to use this as my forum signature.) - Swarm!
OPCFW_CODE
Download Snake Rivals APK, an excellent 3D multiplayer action game for Android smartphones in which you control a snake, complete tasks, and compete against other players. So if you are looking for a 3D Android action game in the style of Slither and other .io snake games, you are in the right place to download the latest version, with some ultimate features to enjoy. Become the biggest snake and don't let the other snakes and worms hit you. There are small snakes moving all around you, along with fruit and other items. Your task is very simple: move and control your snake with your fingertip, destroy the other snakes around you, and grow bigger by eating fruit and smaller snakes. Requires Android 5.0 and up. Description of Snake Rivals APK Snake Rivals is a popular Android action game, similar to other .io snake games where you eat fruit and other things from the ground. Slither into the classic game mode and swipe to move your snake around the arena. You get full control of the main snake character to move around and find fruit, and you can eat as many apples as you can to grow bigger. It's a stunning 3D snake game that comes with lots of new functions and tools, and you can even change the colors of your snake for free. 
Graphics & Features The game comes with premium-looking, sleek graphics and enhanced features. Move your snake around the arena to collect fruit and grow bigger; many players love the polished graphics and color accuracy. It's time to eat more apples and worms to grow your snake bigger and bigger. Some of the game's notable features: - Unlimited apples scattered around the arena for your snake to eat and grow. - Smooth fingertip controls to move your snake anywhere you want. - Free to download the latest version of the game. - Ad-free, for a better playing experience. - User-friendly interface with access to special offers. Download Snake Rivals APK Everyone can easily download the latest version of this game without any issues. To download it, press the download button above. It will take you to a secure page where you can grab the APK for free; press the download button there and the download will start automatically. People Also Ask (FAQs) Is Snake Rivals APK safe? 
Is Snake Rivals APK available for Android? Here you also have all the detailed information about this fantastic Android snake game, so you can start enjoying it and share the experience with your closest companions and fellow snake-game lovers. Many interesting features and advanced options are available, including unlocking snake colors for free. If you like this exciting smartphone game, then don't forget to share Snake Rivals APK with your friends and on social media platforms, to help other Android users enjoy the game on their smartphones for free.
OPCFW_CODE
In the metaphorical game development gold rush, there are the devs out hunting for gold, and there are the people making shovels for them. If you are such a shovel-maker, you know that your task is sometimes an obscure one. Instead of developing software to be used by hundreds of thousands, you're trying to use precision engineering to keep your developers in the zone and at their best game-making capabilities. Fortunately for you shovel-makers, there's a champion for you in the halls of Ubisoft (and at GDC). He is, of course, the talented David Lightbown, and lately he's been talking to the people who developed some of the game industry's classic tools on Gamasutra. Back in December, we were lucky enough to be joined by Lightbown for a conversation about the history of the Unreal Engine and the tools that influenced its creation. During our chat (which you can see up above), he also shared some useful thoughts about the work of tool development that we've transcribed for you down below. Bryant Francis, Editor at Gamasutra David Lightbown, UX Director at Ubisoft Technology Group Why We Study Game UX History "I think that understanding the history of game development tools is really helpful as a designer, as a tools developer, to help you understand why certain decisions were made. And to also try not to repeat those same mistakes." Lightbown: I think that knowing the history of something is super important; if you don't know the history of something you're doomed to repeat it. I think that understanding the history of game development tools is really helpful as a designer, as a tools developer, to help you understand why certain decisions were made. And to also try not to repeat those same mistakes. 
The reason I got into this, actually, is two of my other favorite books: Blood, Sweat, and Pixels, and Dealers of Lightning, about the history of Xerox PARC. And again, talking about the history, we're going back to around 1970, when Xerox PARC was founded, and as some of you may know it was the research facility that came up with the GUI, with Ethernet, and networked printers and all this crazy stuff. The framebuffer, some of the first computer graphics, editing an image was first done at PARC. And this is the GUI that was famously ripped off, so to speak, by Steve Jobs, then by Bill Gates. So knowing this history is super interesting and important. And looking back at some of the GDC postmortems, just during the past two years, look at some of these amazing classic game postmortems: Deus Ex, Oregon Trail, Seaman, Civilization, Pac-Man, Ms. Pac-Man, Diablo, Rez -- and this year, I'm really looking forward to the Bard's Tale postmortem as well -- and I was looking at this and I was saying, "This is really cool, but nobody's doing one of these postmortems on game tools, that's never really been done before." So that was where the idea came from. And I did one just a couple of months ago; the article was released on Gamasutra as an interview I did with John Romero about TEd, the tile editor that he created, and all the influences that were behind it, and how that led to the editor that was used to do everything from Dangerous Dave to Wolfenstein 3D, and how it was used and the history behind it. If you look it up on the Gamasutra blogs you can see the article there. So it's the same thing: trying to retain this history, learn more about it, and not make the same mistakes. 
My next step was to contact a bunch of other people and see if they wanted to talk about their stuff, and that got me a connection to Tim Sweeney, which is how I was able to sit down with him at Gamescom this year in Cologne, Germany, and talk with him about the origins of UnrealEd 1.0. Taking Personal Research Into Day Job at Ubisoft Francis: I'm going to quiz you about your work at Ubisoft for a moment. We were talking earlier about the Ubisoft Assassin's Creed Origins cinematic tools. I'm curious: you've spent all this free time diving into tools history, we've looked at these four tools today in Unreal Engine... in your work on that specific tool, how have you linked your research with your work? Lightbown: You know, it's funny actually, I can't really talk in too much detail, but I can say: if you are going to build a cinematics tool for a game engine, it is, I think, imperative that instead of sitting down and starting to write code right away, you familiarize yourself with how these problems are solved in other software as much as possible. So, in the case of a cinematics editor, go and look at Adobe Premiere, go and look at After Effects, any other non-linear editor. Even audio software: Ableton, Sonar. Yes, they're for editing audio files, but look at the way in which they represent a timeline, and how they let you drag and drop your elements, and how you manipulate them. There is a consistency in some elements of how you do that. I think it's so important to figure out what those consistent elements are, and to try to make your tool resemble those consistent interactions as much as possible, because if someone's used Premiere or any other non-linear editor, and they open up your tool, they'll be very familiar with how it works right off the bat. 
Especially in my work, one of the things that I spend a lot of time doing is just research: being familiar with the tools that are out there, with their history obviously, but also being familiar with how they work, and why they do things the way they do. Just because somebody does something a certain way doesn't mean that you should; you have to question some of those things sometimes. "It's imperative that, instead of sitting down and starting to write code right away, you familiarize yourself with how these problems are solved in other software as much as possible." I kind of think about it as natural evolution. There are some species that have evolved a certain way, in a certain environment, and they've died off. And the same thing goes for certain types of software and interactions. If many interactions are difficult to use, that software might not have as much success being adopted and used by people, and then it dies off, and the ones that have easier interactions are going to survive, and other people are going to look at those and evolve themselves off of those. So it's sort of survival of the fittest, to a certain degree. It's not necessarily our job to reinvent the way that we use this software. Go out and look at how other people have solved these same problems. Spend some time, and I think it really saves you time in the long run, because instead of coming up with your own idea, look at how other people do it and implement it that way. Your users will find it more familiar, and you're also going to save time, as opposed to designing it yourself. You have a great example right there that you can play with and understand how it works. Developing Tools For Other Cultures Francis: You're working under the Ubisoft Technology Group. You're working, at least to my understanding, in the middle of a company that has studios around the globe. There are people who speak different languages. 
I think, as game development gets more global and grows, we're also dealing with the fact that different languages literally have different ways that words are structured. So as a person who makes tools for other professionals' use, if those professionals come from another language, do you have any thoughts about interactions for other languages and cultures, looking towards the future? Lightbown: That's a great question. It's certainly something that I've thought about, that I've been asked before, and I've done some research on this. My understanding is, again, it goes back to what you're familiar with. There was a time when something like Windows was not made to adapt to other cultures. It was made with a specific set of cultures in mind, a specific set of languages. But it was used outside of those cultures, and the people who used it adapted to it, and it has become what they are familiar with; it has become their "normal," so to speak. You can have certain cultures where colors have a different meaning as opposed to what they mean here. But based on the research that I have read, my understanding is that, in the context of a computer software application, people understand that this is different: that red, for example, might mean error. They understand that it doesn't necessarily mean something that is more applicable to their culture. They are able to separate the two, and they understand that in this context these icons and colors mean this, and in my cultural context it's different. However, there is something to be said about developers understanding those people; that is so key. Understand the people using your tools, understand what's natural to them, and try to adapt your tool to make it familiar to them. It'll make it all the easier and more comfortable to use, they'll use it more and more, they'll tell their friends about it, and it can have a snowball effect from there. 
For more developer interviews, editor roundtables, and gameplay commentary, be sure to follow the Gamasutra Twitch channel. Gamasutra and GDC are sibling organizations under parent UBM Americas.
OPCFW_CODE
Héctor Velarde hvelarde - Simples Consultoria - São Paulo - Joined on Repositories contributed to - collective/collective.cover 33 A sane, working, editor-friendly way of creating front pages and other composite pages. Working now, for mere mortals. - collective/collective.liveblog 3 A liveblogging solution for Plone. - collective/collective.nitf 8 A Dexterity-based content type inspired by the News Industry Text Format specification - simplesconsultoria/collective.blueline 0 Helper viewlets to easily insert code on the layout of a Plone site. - collective/sc.social.like 5 Social: Like Actions is a Plone package (add-on) providing simple Google+, Twitter and Facebook integration for Plone content types. Contributions in the last year: 1,323 total May 29, 2014 – May 29, 2015 Longest streak 17 days August 18 – September 3 Current streak 5 days May 25 – May 29 - Pushed 6 commits to simplesconsultoria/sc.photogallery May 29 - Pushed 5 commits to collective/covertile.cycle2 May 27 – May 29 - Pushed 2 commits to collective/collective.cover May 27 – May 29 - Pushed 15 commits to collective/collective.nitf May 26 – May 29 - Pushed 4 commits to collective/buildout.plonetest May 25 – May 29 - Pushed 2 commits to plone/buildout.coredev May 27 – May 29 - Pushed 4 commits to collective/covertile.galleria May 27 - Pushed 17 commits to collective/collective.js.cycle2 May 25 – May 27 - Pushed 1 commit to plone/Products.TinyMCE May 26 7 Pull Requests - Merged #12 Add rebuild_i18n-sh part with inline script definition - Merged #5 Refactor development and CI configurations - Open #3 Add Photo Gallery tile for collective.cover - Open #235 Fix Travis CI configuration - Open #234 Bring back original convention on sorting imports - Open #129 Avoid performance issues when having many users - Open #524 Remove carousel tile from core 9 Issues reported - Open #525 Remove PFG tile from the package core - Open #237 API not compatible with Plone 4.1 - Open #236 Travis CI configuration error leads to package only being tested under Plone 4.3 - Open #128 Inconsistent output from code analysis - Closed #127 ValueError installing package - Open #12 Is it currently possible to have more than one slide with different behaviors? - Open #11 Refactor tile template to allow definition of pagers - Open #528 Default theme not liquid - Closed #5 Release 1.0b1?
OPCFW_CODE
TiPb's getting a lot of questions about the notification error "no valid 'aps-environment' entitlement string found for application", what it means, and what can be done about it. The reason: Google released their new Gmail for iPhone and iPad app today, and subsequently pulled it due to some launch-time bugs involving push notifications. Short answer: Google messed up push notifications and there's nothing you can do about it until Google fixes it and Apple pushes out that fix. In the iOS Provisioning Portal you need to generate various certificates. For all apps you'll normally generate Development, Ad Hoc distribution and Store distribution certificates. For push-enabled apps you also need to generate Development and Production Push certificates. What I think happens is that most people start by generating and downloading the 3 standard certificates and at some later point generate the Push certificates. However, when you create the Push certificates it modifies the standard certificates in some way that tells the OS that the app can be used for push notifications. You'll often re-generate/download the Development and Ad Hoc certificates as you add new devices for testing, but you only have to re-generate the Store certificates once a year when renewing with Apple. So again, what Google probably did is create the standard certificates, then create the Push certificates, and not re-generate/download the Store certificate. It's a really easy mistake to make and there's no indication of a problem anywhere within the submission process to Apple. It's also a pretty trivial thing to fix and I'd expect Google to re-submit and Apple to expedite the release pretty quickly. It does make me wonder why Apple didn't catch this issue; my guess is something about the way they run apps prevents this error from showing up. 
As far as I know the only way to see if this is a problem or not is to run the following command: codesign -dvvvv --entitlements - and look for the following two lines in the output.
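If you'd rather check programmatically, the entitlements that codesign dumps are a standard plist, so they can be inspected with any plist parser. Below is a hedged sketch: the XML sample is purely illustrative (it is not taken from any real app), and the function simply reports whether an aps-environment key is present.

```python
# Sketch: checking a dumped entitlements plist for the aps-environment
# key. The XML below is an illustrative sample of what the codesign
# command above might output for some app; it is not from a real app.

import plistlib

SAMPLE_ENTITLEMENTS = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>application-identifier</key>
    <string>ABCDE12345.com.example.app</string>
    <key>aps-environment</key>
    <string>production</string>
</dict>
</plist>
"""

def aps_environment(entitlements_xml: bytes):
    """Return the aps-environment value, or None if it is missing."""
    return plistlib.loads(entitlements_xml).get("aps-environment")

print(aps_environment(SAMPLE_ENTITLEMENTS))  # production
```

If this function returns None for a push-enabled app, you have the exact problem described above.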
OPCFW_CODE
Brief Description: The finger(1) daemon is vulnerable to a buffer overrun attack, which allows a network entity to connect to the fingerd(8) port and get a root shell. Detailed Description: Fingerd is a daemon that responds to requests for a listing of current users, or specific information about a particular user. It reads its input from the network, and sends its output to the network. On many systems, it ran as the superuser or some other privileged user. The daemon fingerd uses gets(3) to read the data from the client. As gets does no bounds checking on its argument, which is an array of 512 bytes allocated on the stack, a longer input message will overwrite the end of the stack, changing the return address. If the appropriate code is loaded into the buffer, that code can be executed with the privileges of the fingerd daemon. Component(s): finger, fingerd Version(s): Versions before Nov. 6, 1989. Operating System(s): All flavors of the UNIX operating system. Other Information: It can be accessed from any remote network. Effects: You get the same access as the fingerd daemon. Detecting the Vulnerability: * Compare versions with those listed in "Vulnerable Systems." If it matches any of those, you are vulnerable. * Connect to your fingerd daemon and type more than 528 (= 512 + 16) characters (any will do). If your daemon crashes or terminates the connection with no data sent back, you probably have the vulnerability. * Check your fingerd source code for gets; the offending code is most likely gets(line). If you find this, you are vulnerable. (In the version we have, it's at line 40.) Fixing the Vulnerability: * Upgrade to a newer version. * Disable fingerd. If you must run it, run it as an unprivileged user (like nobody); even then, a remote attacker can execute programs as that user on your system. * Modify your source code, recompile, and reinstall. The modification is to change gets(line) to fgets(line, sizeof(line), stdin). 
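The source-code fix can be sketched in a few lines of C. This mirrors the pattern described above (a 512-byte stack buffer read with gets versus fgets); it is not fingerd's actual source, and the function name is illustrative.

```c
/* Minimal sketch of the fix described above: replacing the unbounded
 * gets() read with a bounded fgets() read. This mirrors the pattern in
 * fingerd, not its actual source code. */
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 512

/* Vulnerable pattern (do not use):
 *     char line[LINE_SIZE];
 *     gets(line);              // no bounds check: overruns the stack
 */

/* Fixed pattern: fgets never writes more than LINE_SIZE bytes,
 * including the terminating NUL, no matter how long the input is. */
int read_request(char line[LINE_SIZE], FILE *in)
{
    if (fgets(line, LINE_SIZE, in) == NULL)
        return -1;                        /* EOF or read error */
    line[strcspn(line, "\r\n")] = '\0';   /* strip trailing CR/LF */
    return 0;
}
```

Over-long input is simply truncated at 511 characters instead of overwriting the return address, which is exactly the one-line change the record recommends.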
Keywords: fingerd, buffer overflow, gets, fgets, Internet worm Attack Methods or Tools: Not provided. Advisories and Other Alerts: Donn Seeley, "A Tour of the Worm", Computer Science Department, University of Utah, November 1988. Related Vulnerabilities: None yet. First Report We Know Of: by Jon Rochlis and Mark Eichin, to mailing lists, on Nov. 5, 1988. Revisions of Database Record 1. Omar Vanegas (Jul 21, 1998): Entered into DOVES. 2. Mike Dilger (original): Entered into original database.
OPCFW_CODE
Please also see the 'Common Diseases' tab for more interesting information and links to publications. 2012 Wildlife Pathology Short Course, Small Animal Necropsy Workshop Notes During February 2012 the Registry held a Small Animal Necropsy Workshop in conjunction with the Wildlife Pathology Short Course. The notes from this workshop can be found here. 2012 Wildlife Pathology Short Course, Practical Laboratory Skills Workshop Notes During February 2012 the Registry held a Practical Laboratory Skills Workshop in conjunction with the Wildlife Pathology Short Course. The notes from this workshop include a number of colour images of common faecal parasites and differential cell morphology for amphibians, reptiles, birds and mammals. Sick and Dead Bird Surveillance The purpose of this document is to provide guidelines for the collection of samples that will lead to a diagnosis being reached in sick and dead wild birds collected in Australia, and will allow testing to rule out the presence of animal diseases of concern to Australia (such as avian influenza, West Nile Virus and Newcastle Disease). Wild Bird HPAI Surveillance Manual A Joint Publication of the Food & Agriculture Organisation of the United Nations, and the Zoological Parks Board of NSW. Rome, 2006. A report compiled from a 2 day workshop at the University of Melbourne, November 2012, about the Christmas Island Flying Fox A 2014 Christmas Island Flying Fox status report. Pathology of Australian Native Wildlife Medicine of Australian Mammals Radiology of Australian Mammals A Guide to Health and Disease in Reptiles and Amphibians This 176-page title is the only pet owner/breeder reference on health and diseases in reptiles and amphibians in captivity, published in Australia. Written by practising exotic veterinarians, Dr Brendan Carmel and Dr Robert Johnson, all aspects regarding the captive care of snakes, pythons, lizards, turtles and frogs are presented in a simple to follow layout. 
The 240 colour images show examples of typical health problems to assist the herpetologist in recognising signs, as well as information about the treatment or action to take to rectify or reduce the spread of disease and support the reptile/amphibian back to good health. Too many ill animals are presented to veterinarians by keepers who are mortified when they realise that, through their lack of understanding of correct housing, hygiene, heating, lighting, feeding and breeding procedures, they may have contributed to the onset of disease in the animal in their care. Although this is not always the case, a large percentage of sick pythons, lizards, turtles and frogs are due to incorrect management. Become informed to prevent health problems from entering your collection. Furthermore, become better equipped to recognise signs of illness before further development may prohibit a return to good health. This book is an essential reference for any responsible keeper of reptiles and amphibians. Available from ABK/Reptile Publications at www.reptilepublications.com
OPCFW_CODE
Is it better to put user data in SharedPreferences or SQLite?

I am making an app that the user can log in to. Each time the user logs in, the app retrieves the user's data and saves it locally, so that whenever the app needs user information it does not have to retrieve it again. Is it better to put it in SharedPreferences or SQLite? Since only one user's data will ever be stored, I am inclined to put it in SharedPreferences, but that leaves my app with a lot of key-value data. I could also use SQLite instead, but it looks awkward to have a database whose table always holds only one record. Is it good practice to keep a single record in a database and replace it whenever the data changes? Or is it better to put it in SharedPreferences and make the SharedPreferences a bit messy with many key-value entries?

I think SharedPreferences is the best way to store profile data. In SharedPreferences we can also replace stored data.

public class SharedPref {
    private SharedPreferences pref;
    private Editor editor;
    private Context context;
    int PRIVATE_MODE = 0;

    // SharedPreferences file name
    private static final String PREFER_NAME = "SharedPreferences";

    // All SharedPreferences keys
    private static final String IS_USER_LOGIN = "IsUserLoggedIn";

    @SuppressLint("CommitPrefEdits")
    public SharedPref(Context context) {
        this.context = context;
        this.pref = context.getSharedPreferences(PREFER_NAME, PRIVATE_MODE);
        this.editor = pref.edit();
    }

    // Set current tab position
    public void setCurrentTab(String currentTab) {
        editor.putString(Constant.TabActivity.CURRENT_TAB, currentTab);
        editor.commit();
    }

    public void setTabActivityTitle(String title) {
        editor.putString(Constant.TabActivity.TAB_ACTIVITY_TITLE, title);
        editor.commit();
    }

    public String getSessionValue(String key) {
        return pref.getString(key, null);
    }
}

It totally depends on the data you want to store. I would personally recommend storing it in SharedPreferences.
SQLite is used to store large, structured, and organized data, whereas SharedPreferences is used to store small, unstructured data like login info and user preferences.

I would use SharedPreferences for your scenario. A database would be good if you had several user profiles, but since you only want to store one user profile, it doesn't make sense to create a database for just one row.

You shouldn't use either SQLite or SharedPreferences here; the thing is, Android has already done this for you with a class called AccountManager. Why you should use it: this class provides access to a centralized registry of the user's online accounts. The user enters credentials (username and password) once per account, granting applications access to online resources with "one-click" approval. You can read about this class here: http://developer.android.com/reference/android/accounts/AccountManager.html What you should also know is that you can remove accounts there, so it's especially good for you. Here is a step-by-step tutorial: http://www.finalconcept.com.au/article/view/android-account-manager-step-by-step-2
STACK_EXCHANGE
Party On
Posted by James

Currently, I'm intending to stand as an Independent candidate. I believe that candidates should be a lot more independent, and that parties, while useful for some things, tend to wield far too much power. However, we find ourselves in a political climate that doesn't reward the independent, and doesn't allow lone voices through. Also, there are some built-in biases in the system against the independent candidate. There has been an open question for a while on the OpenPolitics Manifesto over whether it should form a party. Some think we should, some not. I personally fall on the side that says that forming a party is a good thing, if only for short-term tactical reasons. It would be a party designed to make itself unnecessary by reforming the democratic system. The alternative view is that we would be better as a loose collection of independents. This post is a long-form contribution to that debate, really. In my view, the disadvantages of remaining independents mainly come down to brand awareness. When you have to get a very simple, strong message home to as many people as possible, being able to present it in a unified way that people will come across in lots of places is a big advantage. Nuanced messages about being an independent but having a shared manifesto will get lost. Also, the fact that parties get to have their party name, logo, and tagline on the ballot paper is a massive advantage to them over independents, who get none of that. Anyway, a thought occurred to me. We want to remain a decentralised network of contributors, but capitalise on shared recognition and branding. How can we be a party without being a party? What if we register a party with the Electoral Commission, but avoid the use of the word "Party" like hell? Instead, how about the OpenPolitics Network? That way we get the best of both, I think.
Anyone could stand as a candidate for the network as long as they are standing on the manifesto, but there’s no central party control telling them what to do, or even looking like it does. Each candidate is free to run their campaign however they want; after all, they’re basically independents. Unfortunately, the rules state that parties do have to have some centralised aspects - a leader, and a couple of other posts, but these can be elected from the pool of contributors, and while they may technically exist, we can avoid using them in the usual way. I think this might be a way to gain the advantages of party recognition without having to conform to the existing views of parties. Oh, and there is no registered party in the UK at the moment that uses the word “network”, so I think that’s a good omen.
OPCFW_CODE
questions related to offline geotiff image and export option

Hi, this looks like a very nice development. I have a few queries: 1. Can we run this module on a GeoTIFF image already available on my machine? If yes, how? How do we convert it to slippy map format? Does one require the full scene or multiple pieces of one scene? 2. Please give guidelines for offline (without internet) processing. If I have a segmented version of the images available (classified building image in PNG format) and just want to use your post-processing tool for saving to GeoJSON or shapefile, how do I do that? Thanks

Robosat only works with slippy map tiles; you can tile e.g. into 256x256 or 512x512 pixel tiles via https://gdal.org/programs/gdal2tiles.html https://github.com/cogeotiff/rio-tiler Here's a quick and unoptimized tiler script; if your use case gets more involved you should definitely look into better ways of doing this: https://gist.github.com/daniel-j-h/69a967d63d833f74a123e2540bd146b9 I don't understand what you mean by offline processing. All tools except for downloading satellite tiles from a remote server work "offline". Only use the post-processing tools then.

@jaigsingla, to complete @daniel-j-h's answer on imagery tiling: you could also give a look to rsp tile, from https://github.com/datapink/robosat.pink. Usually faster than the others, able to deal with coverage imagery, and to skip nodata tiles... HTH,
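For context on the slippy map format the answers keep referring to: it is a fixed XYZ grid of tiles per zoom level, the same grid that gdal2tiles.py and the tools above produce. A small self-contained sketch of the standard lon/lat-to-tile-index math (this is the conventional formula, not code from robosat itself):

```python
import math

def deg2tile(lon, lat, zoom):
    """Return the (x, y) slippy-map tile index containing a WGS84
    lon/lat point at the given zoom level (Web Mercator grid)."""
    n = 2 ** zoom                       # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    # asinh(tan(lat)) is the Mercator projection of the latitude
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

At zoom 0 the whole world is a single tile (0, 0); each extra zoom level quadruples the tile count, which is why a zoom-20 run such as ./rs cover --zoom 20 covers an area with a very large number of tiles.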
Hello daniel, All tools are offline, I agree. But how to install and use this library on an offline machine is a difficult task. Request you to provide all dependencies as a zip or a virtual environment image >> thanks

Look into docker image save / docker image load to create a self-contained docker image tarball, get it onto your machine, and load it again.

Hi daniel, I could build everything offline. Now I want to use it along with the Bavaria dataset. The following is available in the dataset: a model checkpoint, a GeoJSON file of the heat map area, and two comparison images. After going through your post, we need the following: 1. Any geo-referenced image. 2. From the building shapefile, an OSM extract in GeoJSON format of that area. 3. These two files should be converted to slippy format using ./rs cover. 4. rs rasterize on the OSM slippy-format files. 5. rs train, and so on. I want to simulate the entire scenario using the Bavaria dataset first, but I am unable to fetch it. Please guide.

Please read the documentation for our tools and read through the diary posts here https://github.com/mapbox/robosat#overview they explain in detail how to run the pipeline, where to get the training data from, and so on.

OK, thanks. I am directly starting with my own data. I executed ./rs cover --zoom 20 .geojson building.tiles but it gives an error of xy() takes from 2 to 3 positional arguments but 5 were given.

Do you have a stack trace? Which version are you running, latest? How does your geojson file look like?

Stack trace below. I used the latest version. The geojson file is clipped over Ahmedabad, India.
0%| | 0/482 [00:00<?, ?feature/s]
Traceback (most recent call last):
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/deep/building_extraction/robosat-master/robosat/tools/__main__.py", line 58, in <module>
    args.func(args)
  File "/home/deep/building_extraction/robosat-master/robosat/tools/cover.py", line 30, in main
    tiles.extend(map(tuple, burntiles.burn([feature], args.zoom).tolist()))
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/supermercado/burntiles.py", line 62, in burn
    all_touched=True)
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/rasterio/env.py", line 397, in wrapper
    return f(*args, **kwds)
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/rasterio/features.py", line 280, in rasterize
    for index, item in enumerate(shapes):
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/supermercado/burntiles.py", line 59, in <genexpr>
    ((project_geom(geom['geometry']), 255) for geom in polys),
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/supermercado/burntiles.py", line 12, in project_geom
    for part in geom['coordinates']
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/supermercado/burntiles.py", line 12, in <listcomp>
    for part in geom['coordinates']
  File "/home/deep/anaconda3/envs/fastai/lib/python3.6/site-packages/supermercado/burntiles.py", line 11, in <listcomp>
    [mercantile.xy(*coords) for coords in part]
TypeError: xy() takes from 2 to 3 positional arguments but 5 were given

How did you generate the GeoJSON? What type of features do you have in there? From what I can recall we were supporting "Polygon" features only due to the supermercado dependency limitations.
For our extract tool see https://github.com/mapbox/robosat/blob/3ae9c7e1d58fc446b7c8df3835c628c40ca85ba2/robosat/osm/building.py and the supermercado upstream tickets https://github.com/mapbox/supermercado/issues/13 https://github.com/mapbox/supermercado/issues/30

There is an OSM building shapefile available over Ahmedabad. A cropped subset over Ahmedabad was taken as input and, using QGIS, I just saved it in GeoJSON format for further processing.

Yep, you have features in there which are not Polygon. The upstream supermercado library does not support that, see the tickets linked above. I recommend you generate a GeoJSON file with Polygon features only.

@jaigsingla You could also give a try to rsp rasterize, resilient with all GeoJSON geometry types. Cf: https://github.com/mapbox/robosat/pull/138 https://github.com/datapink/robosat.pink/blob/v0.5.5/docs/tools.md#rsp-rasterize

@ocourtin can you share a practical example of this? And I guess we need to perform tiling on this also??

@daniel-j-h but these are polygons only??

A practical example: https://github.com/datapink/robosat.pink/blob/v0.5.5/docs/101.md#retrieve-and-tile-labels And on the tiling: if your labels are vector ones (GeoJSON or PostGIS), rsp rasterize performs the tiling for you. If your labels are PNG ones (it can happen), rsp tile --label is the way to deal with it... HTH,

@ocourtin what are the config and cover parameters here? What inputs shall I give?

@jaigsingla Any luck resolving your issue on converting segmented PNG to GeoJSON? Stuck on the same.

@hamzahkhan not tried yet.

@daniel-j-h pls make this Tanzania dataset available as a sample here. It is very difficult to fetch the exact dataset without internet.

I no longer have access to the dataset and I'd have to fetch it from scratch myself, too.
GITHUB_ARCHIVE
What regression model to use when independent variables are percentages to predict % outcome?

"Independent" variables: time spent (% at work, % sleeping, % exercising), body mass composition (% fat, % muscle, % bone)
Dependent variable: Smoker (1) or Non-Smoker (0)

What kind of regression model should I use when subsets of the "independent" variables are percentages and are therefore not completely independent of each other?

You could include 2 of the 3 percentages, e.g. % at work and % sleeping. Similarly for body mass composition. Obviously you need to use a logistic/probit type model that is suitable for binary outcomes.

Thank you so much ved, I see how dropping one of the variables (preferably the one with the lowest explanatory power) will make the other variables in the same category independent.

Unless the percentages must add to 100, you might want to include them all anyway. (In the two examples given, the percentages ought to total less than 100 and the totals would likely vary.)

Such variables (with a total of 100%) are known as compositional variables. You could look into compositional data analysis, where transformations of such variables are studied. Some other questions touching on this area: http://stats.stackexchange.com/questions/35265/log-ratio-compositional-analysis http://stats.stackexchange.com/questions/95867/proportions-compositions-in-logistic-regression http://stats.stackexchange.com/questions/89717/multivariate-data-analyis-of-compositional-data

Your response is binary, so you probably want to look at something like a binomial GLM, such as logistic regression. Having a group of $k$ predictors that add to 1 (e.g. the $k=3$ body proportion predictors) implies that at most $k-1$ of them can be in the model because of the multicollinearity issue.
However, I'm going to suggest that you may also want to transform those percentages; they're unlikely to enter the model linearly; indeed, with a logit link my first thought would be that you might want to try something like the logit of the proportions instead.

I'd also go for logistic regression, since you don't mention having a time variable specifying the time from inclusion in the study until start of smoking or censoring (end of study); in that case a Cox regression would be better. I doubt that there is any difference in using percentages as predictors compared to other continuous variables. BMI (Body Mass Index), for example, is not a directly measured predictor either, as it is derived by dividing one measurement by another. As Glen_b mentions, these predictors might not be truly linearly associated with the dependent variable. But transforming them might make predictors more difficult to interpret, and journals typically don't like transformed variables if they are the predictors of main interest.

"journals typically don't like transformed variables if they are the predictors of main interest": I don't see any evidence for this assertion. As an Editor of a statistical journal, I don't think it is especially accurate or helpful. Reviewers and editors in my experience just want there to be good reasons for any transformation.

I agree partly. Let's say that one examines the effect of blood pressure on the risk of Alzheimer's disease, blood pressure being the predictor of main interest, with adjustments made for various covariates. If one discovers that entering blood pressure linearly (without any transformation) provides an adequate fit, but less good than entering the log of blood pressure, I believe that many would prefer the linear version, due to simplicity of interpretation. However, I understand your point of view. This often makes me wonder why referees don't ask for regression diagnostics more frequently.
You've given an example in which a transformation has mixed benefits and reviewers might fairly wonder whether it is worthwhile. That's fine by me and consistent with my position. I don't think one example, or even the implication of many similar, substantiates your much broader claim. Indeed, there are many other examples in which transforming predictors is utterly standard, as when fitting power functions.
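The logit transform suggested in the thread can be sketched in a few lines; the clamping epsilon below is my own addition to keep boundary proportions (exactly 0% or 100%) finite, not something prescribed in the answers:

```python
import math

def logit(p, eps=1e-6):
    """Map a proportion p in (0, 1) to log(p / (1 - p)).

    Values near 0 and 1 are stretched out, which is why a proportion
    predictor often enters a logistic regression more linearly on the
    logit scale than on the raw percentage scale."""
    p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
    return math.log(p / (1.0 - p))
```

Note the symmetry logit(p) = -logit(1 - p), so complementary proportions (e.g. % fat vs. % non-fat) carry the same information with opposite sign, which is another view of the multicollinearity point above.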
STACK_EXCHANGE
Google Lightning Talks are short versions of presentations that might have been shared at Google Webmaster Conferences around the world. Given that in-person events are cancelled for the foreseeable future, Google is adapting its conference content for the web. Videos in the Google Lightning Talks series are scheduled to be published throughout the year. Splitt dedicates the first installment of Lightning Talks to discussing "everyone's favorite" topic: links. Splitt goes over the important role links play for both users and search engine crawlers.

Links Matter to Humans and Bots

Links serve the obvious purpose of letting users navigate between pieces of content. But site owners must be mindful of the role links play for bots and search engines as well. First and foremost, links allow crawlers to find other pages of a website. Crawlers discover and index other pages of a website by following links from one page to another. By following links, the crawler gains an understanding of site structure and information architecture. This is helpful for understanding which pages might be relevant for a given topic. Creating a link is not as straightforward as you might think, cautions Martin Splitt. Here's what Splitt recommends.

Do: Keep it Straightforward
The most straightforward way to put a link on a site is to use an <a> tag with an href attribute.

Don't: Leave Out the href Attribute
"That's not a good idea," Splitt says.

Don't: Use Pseudo URLs
It also doesn't help to add an href attribute without a useful URL, or with a non-navigable "pseudo URL". The result is the same as a link without an href attribute, which means it's not a good idea.

Don't: Use Buttons
Using a button may seem like a viable option for adding a link to a page, but that's not a good idea either. The rule of thumb is: if a link triggers something to happen on the current page, it should probably be a button.
On the other hand, if a link takes a user to another piece of content that wasn't on the page before, then it should be a standard link.

Don't: Rely on Click Handlers
This breaks the built-in accessibility features and isn't a good idea.

Do: Use Semantic HTML
The bottom line to all of this is: use semantic HTML markup and point your link to a proper URL. What's a proper URL? That's explained in the next section.

Using "Proper" URLs

A URL is considered "proper" when it contains the following:
- A protocol
- A host
- A path to a specific piece of content
- A fragment identifier (optional)

Beware of Fragment Identifiers
Given that fragment identifiers are optional, and point to locations within the same piece of content, crawlers ignore them. That's especially important to note if you build a single-page application with links full of fragment identifiers. Crawlers will not follow the links, so they will not be able to understand the web app.

Here are your key takeaways from the first installment of Google Lightning Talks:
- Use proper link markup.
- Do not use fragments to load different content in single-page apps.
- Don't leave out the href attribute.
- Don't use pseudo URLs.
- Don't rely on click handlers.
- Don't use buttons for navigation.

Does Google crawl fragment identifiers?
Given that fragment identifiers (href="#fragment-identifier") are optional, and point to locations within the same piece of content, crawlers ignore them.
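The do's and don'ts above can be sketched in markup. These snippets are illustrative only; the URLs and handler names are placeholders, not examples from the talk:

```html
<!-- Do: semantic HTML pointing at a proper URL -->
<a href="https://example.com/articles/links">Read about links</a>

<!-- Don't: no href attribute — nothing for the crawler to follow -->
<a>no destination</a>

<!-- Don't: a pseudo URL instead of a real one -->
<a href="javascript:goTo('links')">pseudo URL</a>

<!-- Don't: a click handler standing in for a link -->
<span onclick="location.href='/links'">click-handler "link"</span>

<!-- Buttons are for actions on the current page, not navigation -->
<button onclick="openMenu()">Menu</button>
```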
OPCFW_CODE
All-metal, portable, high-speed, low-cost advanced multi-tool desktop factory derived from LulzBot Taz and TazMega. Specifically designed as an easy upgrade path for existing Taz machines. Personally, this upgrade will be applied to my old Taz3 machine, to provide redundancy and additional throughput for the TazMega. - All-metal, no plastic mounting brackets. - Belt-drive X/Y, ACME threaded rod Z. - Composite metal structure with timber reinforcement. Maximum rigidity at minimum cost. - Simple assembly, 8-wheel minimum, no carriage plates, no spacer blocks. - Support for piercet’s bed plate, among several others. - Gantry plates are the only custom hardware required. - Specifically designed as a low-cost upgrade for existing Taz machines. - Specifically intended for use with MightyTool. Please examine the design closely. Alternative subsystems and innovative simplifications have been included. GitHub Repo: https://github.com/mirage335/TazStiff Total BOM cost is <$650, mostly from OpenBuilds, as follows. - ~$100 - Gantry plates from eMachineShop. - $156.90 - Cast Corner Bracket, Black Angle Corner Connector, 90 Degree Joining Plate. More expensive than 3D printed components, and possibly other metal parts sources, but rigidity is especially important here. - $105 - Wheels and eccentric spacers. Some of the wheels and eccentric spacers are probably unnecessary, being included as extra margin for the first prototype. - $41.50 - ACME lead screws. Probably the easiest place to save money, as a variety of other leadscrews could be made to perform equally well for the Z-axis given appropriate high-compression anti-backlash nuts. Are you planning on using a RAMBo or some other controller? What software will you be using to slice models for the MightyTool (subtractive operations vs. the additive 3D printing operations)? Planning to keep the RAMBO electronics, with this build. Might upgrade to a custom Duet/Duex inspired design later though. 
Similarly, planning to use Slic3r to start with, though it would be great to finally compare other modern slicers. Especially MatterControl, which the SeeMeCNC folks recommend highly. Although the rigid table underneath should compensate, using corner brackets for the Y-axis results in a slight twist that has proven difficult to eliminate. TazMega uses plates for this, which should definitely be used for the TazStiff as well in future builds. Assembly mostly complete, held up by lack of tee nuts. Also, plastic and metal 90deg joining plates have been directly compared. - https://youtu.be/4F2My8FK1iI . Mechanical assembly complete. Belts and electronics assembly to go. Again, the lack of large flat joining plates on the Y-axis has bit me. Future TazStiff builds really should use the same plates as TazMega. Now, Marlin does not correctly calculate the bed compensation matrix. Initial bed topography, as measured by differential optical probe. Like TazMega, and despite the lack of flat joining plates mounting the Y-axis, initial alignment is actually quite good. Bed is flat to ~0.25mm. Still, twisting is the dominant error term, which MarlinFirmware cannot compensate. TazStiff’s rubber foam and deck screw mounting to the table underneath should be able to correct this. Finally starting to get really decent prints. Photos attached. Video posted on YouTube - https://www.youtube.com/watch?v=hoJDu2Lae4E . Print speed for the second part was 120mm/s.
OPCFW_CODE
Is it possible to have a shared port for ssh tunneling that segregates sessions based on client? For example, if I have client1 using a tunnel to port 9999 on a shared server for an ssh tunnel to another server like below:

Client1 port 8888 —> shared host port 9999 —> server1 port 4477

Is there a way that port 9999 on the shared server can also be used by another client to tunnel into a different end host, like below?

Client2 port 8888 —> shared host port 9999 —> server2 port 5568

The key point is that the client local port forward to the shared host port would always use the same ports. From the shared host the tunnel can be extended to the desired endpoint, but the tunnel would still be passing over the same port on the shared host. I've tried a number of things but either ended up with the client2 session being rejected over that port or traffic from client1 being fed to client2. I realize this is not an ideal setup but unfortunately, it's what I have to work with.

--Edit-- Adding an example: There are 2 web servers that are only reachable through a shared host. In this example, they will just reply with either 'server1 response' or 'server2 response'. Authentication is handled by RSA keys, and user1 and user2 are using completely separate hardware.

user1 connects first:

user1@localbox1$ ssh -t -L 8888:localhost:9999 email@example.com "ssh -L 9999:localhost:80 firstname.lastname@example.org"
email@example.com $

The tunnel works as expected:

user1@localbox1$ curl "http://localhost:8888"
server1 response
user1@localbox1$

When user2 attempts to connect, they are given an error message and the tunnel from user1 is used:

user2@localbox2$ ssh -t -L 8888:localhost:9999 firstname.lastname@example.org "ssh -L 9999:localhost:80 email@example.com"
bind: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 9999
Could not request local forwarding.
firstname.lastname@example.org $

User1's session is then forwarded to user2's tunnel.
user2@localbox2$ curl "http://localhost:8888"
server1 response
user2@localbox2$

If the tunnel had been created, the response would have been 'server2 response' instead of 'server1 response'. It's important to note that the remote command for the second tunnel is an unfortunate necessity, as the end host address may change and the values cannot be determined from the user end. For simplicity in this example, I've just used some actual end host addresses.
OPCFW_CODE
Uri.https() throws when Map<String, dynamic> queryParameters has values other than String or Iterable

This tracker is for issues related to:
Dart core libraries: uri.dart
Dart SDK Version (dart --version): 2.13.1
using MacOSX
using Flutter

If you try to create a Uri.https() with Map<String, dynamic> queryParameters with values other than String or Iterable, it will fail to parse. I think the source of the issue is around here, uri.dart:2163:

queryParameters.forEach((key, value) {
  if (value == null || value is String) {
    writeParameter(key, value);
  } else {
    Iterable values = value;
    for (String value in values) {
      writeParameter(key, value);
    }
  }
});

It looks like only the cases where the value is one of the above-mentioned types are handled. I have a hunch that someone even thought about this, since the declaration of the function looks like this, uri.dart:2139:

static String? _makeQuery(String? query, int start, int end,
    Map<String, dynamic /*String|Iterable<String>*/ >? queryParameters)

They have the correct types in a comment right next to the dynamic!

That is correct. It's not documented, and it should be. Likely the documentation wasn't updated when we started allowing multiple strings as a valid value; before that, the type Map<String, String> was sufficient.

The documentation for the default Uri constructor explains what's expected for queryParameters, but the Uri.http/Uri.https documentation doesn't mention any of that (nor does it refer readers to the default constructor). Additionally, I think Uri should generate a better error message instead of assuming that queryParameters values are Iterable if they aren't String. The current failure mode is very confusing. And, of course, another option would be to make queryParameters friendlier and less reliant on documentation by making it automatically call .toString() on non-Iterable Map values.

Agree with everything until the last one. Calling toString on values can hide bugs.
If you put in something which was intended to be an iterable, but you made an off-by-one error and ended up with something else, you'll immediately be told if we throw. If we just do toString, it might "work" for a while. For values that are dynamically typed, so you don't get static type checking to help find bugs, I find that being very strict about which values you accept, and validating early and often, is usually better in the long run.

I agree. The best improvement for me would be a more informative exception, telling me that the only expected values are String or Iterable. That way I'd have a clear and direct explanation in front of me about the issue that just happened. The next best thing would be a class for the arguments, so queryParameters would take a Map<String, QueryParamValue>?, like so:

class QueryParamValue {
  final String? one;        // name up for debate
  final Iterable? multiple; // name up for debate

  QueryParamValue({this.one, this.multiple})
      : assert(one != null || multiple != null,
            "one or multiple should be a non-null value"),
        assert(one == null || multiple == null,
            "only one of [one] or [multiple] should be used, not both");
}

The third most helpful solution for me would be the better documentation on dart.dev, as @jamesderlin mentioned. I agree with @lrhn on .toString() being problematic. Either make the unexpected not doable at all, or give me a clear indication about what the unexpected thing was, but don't try to figure out what I could have meant; that would open a whole new box of possible exceptions, making the whole issue more complicated and potentially confusing.

Any update on this? I encountered this twice today before figuring it out.
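The strict, informative validation the thread argues for can be sketched as follows; this is written in Python purely for illustration, and the function name is invented, not anything from the Dart SDK:

```python
def encode_query(params):
    """Flatten a dict of query parameters into (key, value) pairs.

    Accepts only None, str, or an iterable of str as values, and fails
    early with a message naming the offending key and type, instead of
    assuming anything non-str must be iterable."""
    parts = []
    for key, value in params.items():
        if value is None or isinstance(value, str):
            parts.append((key, value))
        elif hasattr(value, "__iter__"):
            values = list(value)   # materialize so one-shot iterators work
            if not all(isinstance(v, str) for v in values):
                raise TypeError(
                    f"query parameter {key!r}: iterable values must all be str")
            parts.extend((key, v) for v in values)
        else:
            raise TypeError(
                f"query parameter {key!r} must be a str or an iterable of "
                f"str, got {type(value).__name__}")
    return parts
```

Passing an int, as in the original bug report, now produces a message naming the key and the actual type rather than a confusing error from deep inside the encoder.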
GITHUB_ARCHIVE
Single Status Update

I compiled a custom version of ffmpeg with Netflix's VMAF enabled, and I've spent all day today trying to figure out the best settings to use when transcoding my H.264 files to HEVC using NVENC. VMAF is a tool for assessing the quality of transcoded video compared to the original file. It's quite neat. It's also horribly, horribly slow. I am only interested in constant quality, i.e. constqp in ffmpeg parlance, and I have no interest in playing around with CBR or VBR. With that in mind, I've come to the following conclusions:

- preset: No difference between slow and hq; it doesn't matter which one I use.
- nonref_p: No difference whatsoever; didn't change the VMAF score in any way.
- weighted_pred: Enabling does improve quality and seemingly also reduces the output's filesize slightly.
- rc-lookahead: Enabling does improve quality, but a value of 4 produced the exact same quality as 32, so there doesn't seem to be any point in going past 4.
- spatial_aq: Enabling does improve quality, with a caveat!
- aq_strength: The higher the value, the larger the output file and the higher the quality, but the filesize grows disproportionately compared to the improvement in VMAF score. In fact, in my testing it was better to use an aq_strength of 1 and go for a lower qp, as I can get a similarly-sized file with much higher PSNR, SSIM and VMAF that way!
- profile: Couldn't test; the output is somehow broken. It causes VLC to crash when using DXVA to decode, and produces a messed-up image using D3D decoding. No idea if the issue is with ffmpeg or NVIDIA's drivers, but the result is the same whether I am encoding under Windows or Linux and whether I am using self-compiled ffmpeg or someone else's build.
- tier: No effect.
With the above in mind, my ffmpeg command looks as follows at the moment:

```
ffmpeg -hwaccel nvdec -c:v h264_cuvid -hwaccel_output_format cuda -i input.file -c:v hevc_nvenc -preset:v slow -rc:v constqp -qp:v 26 -profile:v main -tier:v main -spatial_aq:v 1 -aq-strength:v 1 -rc-lookahead:v 4 -weighted_pred 1 -c:a copy output.file
```

Disclaimer: this is on a Pascal GPU. The results might look different on e.g. Turing. Ain't got no Turing, though, so can't test. Also, I ain't no ffmpeg or video-encoding guru, so these may not be the actual bestest of best settings; these are only the best I've been able to come up with so far.

EDIT: Oof. Apparently my test clip wasn't long enough, as when I tested with a full-length video, weighted_pred caused my GPU to crash! Seems to only work on short clips, which sucks!
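Since sweeping these settings by hand is tedious, a small script can generate the argument list for the command above so different qp or aq-strength values can be tried in a loop and scored with VMAF afterwards. This is my own sketch, not part of ffmpeg; the helper name `nvenc_args` and its defaults are hypothetical, mirroring the command line above.

```python
# Builds the argument list for the ffmpeg command above, so qp and
# aq-strength can be varied programmatically. Hypothetical helper, not
# part of ffmpeg.


def nvenc_args(src, dst, qp=26, aq_strength=1, lookahead=4):
    return [
        "ffmpeg",
        "-hwaccel", "nvdec", "-c:v", "h264_cuvid",
        "-hwaccel_output_format", "cuda",
        "-i", src,
        "-c:v", "hevc_nvenc", "-preset:v", "slow",
        "-rc:v", "constqp", "-qp:v", str(qp),
        "-profile:v", "main", "-tier:v", "main",
        "-spatial_aq:v", "1", "-aq-strength:v", str(aq_strength),
        "-rc-lookahead:v", str(lookahead),
        "-weighted_pred", "1",
        "-c:a", "copy",
        dst,
    ]
```

The resulting list can be handed to subprocess.run() for each qp value under test.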
OPCFW_CODE
The production of a cheap and efficient early era-3 transistor microchip for basic electrical functions, using the then-standard silicon/germanium etching process.

| Made in | About ~1964 |
| Transistors per chip | 3 |
| Power supply | Low or battery power |
| Still in use | Yes, but with increased efficiency and reduced size |

It was meant to replace transistors like the American 2N34 PNP germanium alloy-junction transistor, so the circuit would have 1, not 3, cans on it! Like all ICs, it is made to a JEDEC standard and ID numbering code dating from after 1953. It was one of the Netherlands's first moves into the modern electronics business, and it was seen as a national icon at the time. It uses a 3-legged metallic TO-18 transistor package shell; TO-18 stands for Transistor Outline Package, Case Style 18. It was often mistaken for the 2N107 germanium alloy-junction PNP transistor (General Electric (GE)) at the time.
OPCFW_CODE
rpm: set PAGE_SIZE=65536 on CentOS 7/8 aarch64

Closes: #176

Arm64 with Ubuntu/AL2 and x86_64 with all OSes use a 4096-byte page size, but Arm64 with CentOS uses a 65536-byte page size. So, by specifying --with-lg-page=16, we explicitly set PAGE_SIZE=65536 on such environments. This commit fixes the following errors:

```
fluentd[15556]: <jemalloc>: Unsupported system page size
fluentd[15556]: <jemalloc>: Unsupported system page size
fluentd[15556]: [FATAL] failed to allocate memory
```

It seems OK on arm64v8 even though --with-lg-page=16 is enabled (td-agent starts without error):

```
% docker run -it arm64v8/centos
[root@64d8fbc653f8 /]# uname -a
Linux 64d8fbc653f8 5.7.0-2-amd64 #1 SMP Debian 5.7.10-1 (2020-07-26) aarch64 aarch64 aarch64 GNU/Linux
[root@64d8fbc653f8 /]# getconf PAGESIZE
4096
```

```
export LD_PRELOAD=/opt/td-agent/lib/libjemalloc.so
/usr/sbin/td-agent
```

On an EC2 M6g CentOS 7 instance, the --with-lg-page=16 binary seems to work:

```
$ rpm -q kernel
kernel-4.18.0-147.8.1.el7.aarch64
$ sudo rpm -ivh td-agent-4.0.0-1.el7.aarch64.rpm
Preparing...                  ################################# [100%]
Updating / installing...
   1:td-agent-4.0.0-1.el7     ################################# [100%]
Created symlink from /etc/systemd/system/multi-user.target.wants/td-agent.service to /usr/lib/systemd/system/td-agent.service.
prelink detected. Installing /etc/prelink.conf.d/td-agent-ruby.conf ...
```
```
[centos@ip-172-31-28-205 ~]$ sudo systemctl start td-agent
[centos@ip-172-31-28-205 ~]$ sudo systemctl status td-agent
● td-agent.service - td-agent: Fluentd based data collector for Treasure Data
   Loaded: loaded (/usr/lib/systemd/system/td-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-08-17 02:33:22 UTC; 9s ago
     Docs: https://docs.treasuredata.com/articles/td-agent
  Process: 6687 ExecStart=/opt/td-agent/bin/fluentd --log $TD_AGENT_LOG_FILE --daemon /var/run/td-agent/td-agent.pid $TD_AGENT_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 6693 (fluentd)
   CGroup: /system.slice/td-agent.service
           ├─6693 /opt/td-agent/bin/ruby /opt/td-agent/bin/fluentd --log /var/log/td-agent/td-agent.log --daemon /var/run/td-agent/td-agent.pid
           └─6696 /opt/td-agent/bin/ruby -Eascii-8bit:ascii-8bit /opt/td-agent/bin/fluentd --log /var/log/td-agent/td-agent.log --daemon /var/run/td-agent...

Aug 17 02:33:21 ip-172-31-28-205.ap-northeast-1.compute.internal systemd[1]: Starting td-agent: Fluentd based data collector for Treasure Data...
Aug 17 02:33:22 ip-172-31-28-205.ap-northeast-1.compute.internal systemd[1]: Started td-agent: Fluentd based data collector for Treasure Data.
```

It seems OK on arm64v8 even though --with-lg-page=16 is enabled (td-agent starts without error).

Yes, a larger baked-in page size should also support smaller run-time page sizes.

refs:
https://github.com/jemalloc/jemalloc/issues/467
https://github.com/jemalloc/jemalloc/pull/769

:tada:
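The compatibility rule in the last comment can be sketched in a few lines. `supports_runtime_page` below is my own hypothetical helper, encoding the idea that a jemalloc binary built with --with-lg-page=N works on any kernel whose page size is at most 2**N, which is why baking in lg-page=16 covers both 4 KiB and 64 KiB aarch64 kernels.

```python
# Sketch of the page-size compatibility rule discussed above: jemalloc built
# with --with-lg-page=N supports run-time page sizes up to 2**N bytes, so
# lg-page=16 (65536) covers both 4 KiB and 64 KiB kernels.
import mmap


def supports_runtime_page(baked_lg_page, runtime_page_size):
    """True if the baked-in page size covers the kernel's page size."""
    return runtime_page_size <= (1 << baked_lg_page)


# The current interpreter's page size, for comparison:
current_page_size = mmap.PAGESIZE
```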
GITHUB_ARCHIVE
The Gregorian calendar that we use today is a compromise between having leap years spaced as evenly as possible and keeping the calculations relatively simple. (The older Julian calendar, which had a leap year every 4 years, was also such a compromise, one with a greater bias toward simplicity.)

In the Gregorian calendar, every 4th year is a leap year, except that every 100th year is not a leap year, except that every 400th year is a leap year. The cycle repeats every 400 years, with 97 leap years in each cycle. But they're not distributed as evenly as they could be; most successive leap years are 4 years apart, but some of them are 8 years apart. The Gregorian calendar assumes that a year is 365.2425 days. In fact, it's slightly less than that; WolframAlpha says it's 365.242190419 days.

What you're suggesting, I think, is a calendar in which leap years are always either 4 or 5 years apart, and are as evenly distributed as possible, so that the difference between the calendar and the astronomical year is minimized. To do this, start with a 365-day year, and keep track of the offset between the calendar and the astronomical year. This offset increases by 0.242190419 days every year.

- Year 0: offset = 0
- Year 1: offset = 0.242190419 days
- Year 2: offset = 0.484380838 days
- Year 3: offset = 0.726571257 days
- Year 4: offset = 0.968761676 days
- Year 5: offset = 1.210952095 days
  - Add a leap day, recompute offset = 0.210952095 days

Every time the offset exceeds 1 day, add a leap day and subtract 1 day from the offset. This will keep the offset as small as possible over time, but at the expense of easy predictability; it becomes much more difficult for those who are not mathematically inclined to understand when the next leap year is going to be. And the calendar would presumably have to be adjusted as the number of days in a solar year is computed more precisely, or as it changes as the Earth's rotation rate changes slightly.
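The offset rule above is easy to simulate. This short Python sketch (mine, using the quoted 365.242190419-day figure) generates the leap years and lets you verify that consecutive leap years are always 4 or 5 years apart, with 96 leap days accumulating in the first 400 years (versus the Gregorian 97, consistent with the true year being slightly shorter than 365.2425 days).

```python
# Simulation of the offset-accumulation calendar described above, assuming
# the quoted figure of 365.242190419 days per astronomical year.
FRACTION = 0.242190419  # excess of the astronomical year over 365 days


def leap_years(n_years):
    """Return the leap years among years 1..n_years under the offset rule."""
    leaps, offset = [], 0.0
    for year in range(1, n_years + 1):
        offset += FRACTION
        if offset > 1.0:      # a full day has accumulated:
            offset -= 1.0     # add a leap day and carry the remainder
            leaps.append(year)
    return leaps
```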
Assuming that the figure of 365.242190419 days is correct, your calendar would repeat itself not every 400 years, but every billion years. An alternative would be to keep the Gregorian calendar's assumption of 365.2425 days per year, and just distribute the leap years more evenly through the 400-year cycle. For example, starting from 2000, you would have leap years in 2000, 2004, 2008, 2012, ..., 2128, 2133, 2137, 2141, ..., 2265, 2270, 2274, .... In either case, most of the time leap years would not be years that are multiples of 4, losing a nice property of the Julian and Gregorian calendars.

That's the math, but it's absolutely trivial compared to the politics. I'm afraid there's very little chance that such a system would be widely accepted. The Gregorian calendar, which was a clear improvement over the Julian calendar, was introduced in 1582, but it wasn't fully adopted worldwide until the late 1920s (and there are still pockets of resistance). There are tremendous advantages in the fact that almost the entire world uses the Gregorian calendar, and it will remain within a day or so of the astronomical year for the next several thousand years. Your proposed calendar has the advantage of remaining very slightly closer to the astronomical year, but the costs of universal adoption would be tremendous, and the costs of partial adoption would be even worse.
OPCFW_CODE
from typing import List

from exceptions.CheaterException import CheaterRecognized
from exceptions.ANDTripleConditionException import ANDTripleConditionFalse


def xor(a: bytearray, b: bytearray, c=bytearray(0), d=bytearray(0),
        e=bytearray(0)) -> bytearray:
    """Byte-wise XOR of up to five inputs, left-padded to equal length.

    :return: byte-wise XOR of a, b, c, d and e
    :rtype: bytearray
    """
    l = max(len(a), len(b), len(c), len(d), len(e))
    # Pad with leading zero bytes if the inputs don't have the same length.
    if len(a) < l:
        a = bytearray(l - len(a)) + a
    if len(b) < l:
        b = bytearray(l - len(b)) + b
    if len(c) < l:
        c = bytearray(l - len(c)) + c
    if len(d) < l:
        d = bytearray(l - len(d)) + d
    if len(e) < l:
        e = bytearray(l - len(e)) + e
    f = bytearray(l)
    for i in range(l):
        f[i] = a[i] ^ b[i] ^ c[i] ^ d[i] ^ e[i]
    return f


def print_output(proto):
    """Assemble the first 32 output bits of `proto` into a signed integer."""
    bin_out = ""
    dez_out = 0
    for i, res in enumerate(proto.outputs):
        tmp = int.from_bytes(res.output, byteorder='big')
        bin_out += str(tmp)
        dez_out += tmp * pow(2, i)
        if i + 1 == 32:
            break
    tmp = dez_out.to_bytes(4, byteorder='big')
    dez_out = int.from_bytes(tmp, byteorder='big', signed=True)
    print("\nResult in binary: " + bin_out[::-1])
    print("Result in decimal: " + str(dez_out))
    return dez_out


def id_in_list(id: int, lists: List[List[List[int]]]) -> bool:
    for group in lists:
        for inner in group:
            if id in inner:
                return True
    return False


def AND(a, b):
    """Byte-wise AND of a and b, left-padded to equal length.

    :return: byte-wise AND of a and b
    :rtype: bytearray
    """
    l = max(len(a), len(b))
    # Pad with leading zero bytes if the inputs don't have the same length.
    if len(a) < l:
        a = bytearray(l - len(a)) + a
    if len(b) < l:
        b = bytearray(l - len(b)) + b
    c = bytearray(l)
    for i in range(l):
        c[i] = a[i] & b[i]
    return c


def _check_bit(r, m, other_k, delta, label, triple_a, triple_b):
    """Verify one authenticated bit: M must equal K, XORed with delta if r == 1."""
    if r == b'\x01':
        ok = m == bytes(xor(other_k, delta))
    else:
        ok = m == other_k
    if not ok:
        print(triple_a)
        print(triple_b)
        print("Cheat " + label)
        raise CheaterRecognized()


def check_and_triple(and_triple_A, and_triple_B, delta_a: bytes,
                     delta_b: bytes, full_check=False):
    """Check two AND triples for correctness.

    :param and_triple_A: first AND triple as protobuf message
    :param and_triple_B: second AND triple as protobuf message
    :param delta_a: delta from person A as bytes
    :param delta_b: delta from person B as bytes
    :param full_check: also verify the third bit and the AND-triple condition
    """
    id_a, id_b = str(and_triple_A.id), str(and_triple_B.id)
    # Check the first and second bits from A, then from B.
    _check_bit(and_triple_A.r1, and_triple_A.M1, and_triple_B.K1, delta_b,
               "bit 1 A. ID: " + id_a, and_triple_A, and_triple_B)
    _check_bit(and_triple_A.r2, and_triple_A.M2, and_triple_B.K2, delta_b,
               "bit 2 A. ID: " + id_a, and_triple_A, and_triple_B)
    _check_bit(and_triple_B.r1, and_triple_B.M1, and_triple_A.K1, delta_a,
               "bit 1 B. ID: " + id_b, and_triple_A, and_triple_B)
    _check_bit(and_triple_B.r2, and_triple_B.M2, and_triple_A.K2, delta_a,
               "bit 2 B. ID: " + id_b, and_triple_A, and_triple_B)
    if full_check:
        # Check the third bit from A and from B.
        _check_bit(and_triple_A.r3, and_triple_A.M3, and_triple_B.K3, delta_b,
                   "bit 3 A. ID: " + id_a, and_triple_A, and_triple_B)
        _check_bit(and_triple_B.r3, and_triple_B.M3, and_triple_A.K3, delta_a,
                   "bit 3 B. ID: " + id_b, and_triple_A, and_triple_B)
        # Check the AND-triple condition: r3 = r1 AND r2 on the shared bits.
        if xor(and_triple_A.r3, and_triple_B.r3) != AND(
                xor(and_triple_A.r1, and_triple_B.r1),
                xor(and_triple_A.r2, and_triple_B.r2)):
            raise ANDTripleConditionFalse()
    return True
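For reference, the left-padding behaviour relied on by the xor helper above can be exercised in isolation. `padded_xor` below is my own standalone two-argument copy of that logic, not part of the module.

```python
# Standalone sketch of the padding behaviour used by xor() above: the
# shorter input is left-padded with zero bytes before the byte-wise XOR.


def padded_xor(a: bytes, b: bytes) -> bytes:
    l = max(len(a), len(b))
    a = bytes(l - len(a)) + a
    b = bytes(l - len(b)) + b
    return bytes(x ^ y for x, y in zip(a, b))
```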
STACK_EDU
As is well known, there are three types of cloud computing: software as a service (SaaS), such as Salesforce.com; platform as a service (PaaS), such as Google App Engine; and infrastructure as a service (IaaS), such as Amazon's AWS. Google has been in the PaaS space but recently dived into IaaS. A great thing about living in Silicon Valley is that local meetings with the movers and shakers of technology market segments are within an easy 30-minute drive. The meeting that prompted this blog was on cloud computing. Cloud computing is changing the way IT is delivered and thereby changing how and where we conduct computing. Consequently, it has a vast impact on the way we use energy. Some conclude cloud computing saves energy, while others are skeptical about it. In any event, Google, a mammoth of computing power, has entered another category of cloud computing, so I attended SVForum's cloud computing and virtualization special interest group meeting on Google's new service. (By the way, Google Compute Engine was announced about a month ago at the Google I/O conference.) The SIG meeting was well attended; more than 100 people showed up. These were the speakers:

- Marc Cohen and Kathryn Hurley of Google, developer relations engineers
- M. C. Srivas of MapR, cofounder and CTO

Dave Nielsen, one of the cochairs of this SIG, and others did a good job organizing this meeting. Normally, when a new product or service is introduced, a vendor or provider gives a dry presentation about how great their product or service is. Google wanted to show how solid their high-speed and security offering is, so they included a couple of good demos. Well, that was not enough. On top of that, they invited MapR, one of their partners, to share their experience in deploying MapR's version of Hadoop on the Google Compute Engine platform. Hadoop is open source from the Apache foundation and is used to crunch Big Data.
MapR took Hadoop and added features to make it compatible with enterprise requirements. Big Data and Hadoop have opened up a tremendous number of market opportunities, so MapR is not the only one to exploit them; Cloudera and Hortonworks also provide Hadoop-on-steroids versions for enterprises. By the way, I found a research report by Dave Ohara, The big machine: creating value out of machine-driven big data, to be very good tutorial material on Big Data. MapR has also deployed its engine on Amazon AWS, and a performance comparison would be interesting. During the discussion, MapR revealed their performance data but said they had not conducted such a performance benchmark on AWS.

Back to Google Compute Engine. Google is no stranger to cloud computing. Here's a simplified list of Google's cloud offerings: Marc Cohen speaks about Google's history of cloud offerings.

What is Google Compute Engine? Marc summarized it on one slide: Marc Cohen presented an overview of Google Compute Engine.

- Infrastructure as a service (IaaS)
- Supports Ubuntu and CentOS (more operating systems, such as Windows, will come later)
- KVM as hypervisor
- Deployable in two territories (one eastern and two central time zone data centers, only within the same data center rather than inter-data-center)
- In private beta (you need to apply to be deployed on the platform; see here for more detail)
- Free for now (only qualified users) but will be charged for later
- No SLA guarantee now (under consideration for the official release)

Marc did not elaborate on SLA, but judging from what he said, you need to specify which territory you want to deploy your load in. All the computing and the data associated with it stays in the same data center (i.e., cloud). No matter how we improve technologies, we will still be bound by the laws of physics, and we cannot send packets any faster than the speed of light.
If they want to guarantee an SLA, they need to make a lot of assumptions and impose restrictions on their customers. I have not heard any cloud service provider discuss SLAs, and I wonder how they can guarantee one, even with conditions. More details can be found in their data sheet here, and here are two useful links. Some architectural information follows. The yellow disk-like box behind Kathryn Hurley is cloud storage. One thing unique about this is the use of their command-line interface with the gcutil library.

Now to the benchmarks and usability results shared by MapR. It is more convincing if your actual user, rather than yourself, says good things about your offering. Even though this is not a blog on Hadoop or MapR, here's some basic information shared by Jack Norris, VP of marketing at MapR. He also summarized MapR's deployment on Google Compute Engine, as follows. Srivas added the following, more detailed benchmark: the one on the Google platform outperformed the one hosted on a physical platform in every benchmark except for processing time. There was no comparison of cost. Srivas said jokingly that we had better think twice before purchasing and owning a bunch of servers. This is because a server's life is probably only two to three years, and the minute you buy a new server, it becomes obsolete because new servers with new technologies are invented constantly. Spending so much money is one thing, but after your business goal changes and you no longer need that many servers, what do you do with them? They do not disappear magically.

In the area of IaaS, AWS was way ahead of the market curve, followed by Rackspace and others. As Dave Nielsen said at the beginning of the meeting, those who were working on IaaS (e.g., Amazon) are adding a PaaS solution (e.g., Elastic Beanstalk), and those who were in the PaaS market (e.g., Google) are adding an IaaS solution.
The cloud market is still expanding, and in spite of some problems, such as lock-in (due to a lack of standards), security worries, and lack of control, it is still growing rapidly because of its wide and broad market. As long as mobile computing and sensor networks (such as the smart grid) keep growing, the end of that growth does not seem even near. While very few interoperable cloud platforms exist, it is always good to have competition.
OPCFW_CODE
When I joined the Microsoft patterns & practices Mobile Client Software Factory team a couple of years ago, I discovered test-driven development as a reality, but the first big issue we had to face then was the lack of unit testing support for the .NET Compact Framework. Despite there being so many testing frameworks for the full framework, the .NET CF was nobody's land regarding unit testing. Not even Visual Studio 2005, which includes a nicely integrated unit testing framework in its Team Suite edition, provided support for .NET CF unit testing. There was only one option: build our own unit testing framework (or at least our own test runner), and that was what we called the p&p Compact Framework Test Runner, which is part of the Mobile Client Software Factory. It was just enough effort to make things work. There was a big wish list of cool features that never got done, like IDE integration. I also want to mention another old project with the same mission, the CFNUnitBridge by Trey.

Now Visual Studio 2008 (formerly Orcas) has arrived, and it includes support for .NET CF unit testing with very nice IDE integration, using the same approach as full framework unit testing. Such good news! Wanna try it? Let's do a remake of a very interesting Daniel Moth post from a couple of years ago:

1) Create a new Smart Device Class Library for .NET CF 3.5 in Visual Studio 2008.

2) Add a new class to the project with the following content:

```csharp
public class Class1
{
    public Int32 GetBuildVersionPart()
    {
        // Return the build part of the runtime version.
        return Environment.Version.Build;
    }
}
```

3) Right-click on the method and choose "Create Unit Tests..."
And select only the GetBuildVersionPart() method. Enter the test project name. It will create a new test project, a unit testing class and the following unit test method:

```csharp
/// <summary>
/// A test for GetBuildVersionPart
/// </summary>
[TestMethod()]
public void GetBuildVersionPartTest()
{
    Class1 target = new Class1();
    int expected = 0; // TODO: Initialize to an appropriate value
    int actual;
    actual = target.GetBuildVersionPart();
    Assert.AreEqual(expected, actual);
    Assert.Inconclusive("Verify the correctness of this test method.");
}
```

4) Replace expected = 0 with expected = 50727, which is the build number for the full framework 3.5, and comment out the Assert.Inconclusive line (or delete it).

5) Run GetBuildVersionPartTest from the Test View window. It will build the projects, launch the emulator and run the tests on the emulator. After some seconds, the result comes back: the test has failed. The expected value was 50727, the build number for the full framework, but the actual value was 7283, the build number for the Compact Framework. This time the unit test has run on the emulator! It really was a .NET CF unit test :)
OPCFW_CODE
The best way to solve problems with the Windows XP Recovery Console
June 23, 2020 by Armando Jackson

We hope that if you need to restore the Recovery Console on your Windows XP system, this article will help you.

- Launch Windows XP.
- Insert the Windows XP CD into the drive.
- Click Start.
- Click Run.
- Enter the following command, but replace e: with the letter of your computer's optical drive: e:\i386\winnt32.exe /cmdcons
- Press the Enter key.
- In the Windows Installer warning message, click Yes.

How do I access the Windows Recovery Console?

- Reboot the computer.
- After the startup message appears, press the F8 key.
- Select the option to repair your computer.
- Click the Next button.
- Choose your username.
- Enter your password and click OK.
- Select the Command Prompt option.
- When you are done with the command line, close the window.

The Recovery Console is a special startup method that you can use to solve problems that prevent a Windows installation from starting correctly. It allows you to access files, format disks, disable services and perform other tasks from a console command line when the operating system cannot be loaded. It is recommended that you use the Recovery Console only after Safe Mode and the other standard boot options have failed to work. The Recovery Console is also useful in other situations, for example when removing malicious files that run in both safe and standard modes so that you cannot otherwise remove the infection. This tutorial will show you how to install and use the Recovery Console. For those familiar with DOS or the command line, the Recovery Console will feel very familiar. For those who are not familiar with this type of environment, I recommend reading an introduction to this type of interface first. I recommend installing the Recovery Console directly on the computer so that it is accessible in the future if needed.
The Recovery Console takes only about 7 megabytes, so there is no reason not to install it in case you need it. When the Recovery Console starts, you will be asked to enter the administrator password before proceeding. In many cases, the Recovery Console does not recognize the administrator password if XP came preinstalled on your computer. In these situations, you can change a registry setting so that the Recovery Console does not ask for a password. This option works in both Windows XP Home and Pro.

The Recovery Console is similar to the standard command line, but differs from it. Some commands work, others do not, and new commands are available. There is no graphical interface, and all commands must be entered by typing them at the console's command line and pressing the Enter key. This can be confusing for those who don't know this type of interface, but after a few commands it becomes easier. There is a list of commands available in the Recovery Console; you can type help followed by a command name to see a more detailed explanation, for example: help attrib.

Warning: to remove the Recovery Console, you need to modify the Boot.ini file. If you edit this file incorrectly, your computer may not start correctly. Please attempt this step only if you are sure you want to.
OPCFW_CODE
Web App Development

Agile web development that gets investors lined up

Ruby is a modern programming language that allows developers to build all kinds of applications, from small scripts to enterprise web platforms. Ruby on Rails is a framework, a set of tools built on top of Ruby, aimed at developing web applications quickly. Although Rails is the most popular web development technology for Ruby, some projects require alternative frameworks. Depending on your product and its needs, we will help you choose the right tool for the job. We have experience working with Grape, Sinatra and Hanami, and are open to exploring other options.

We recommend Elixir for the development of scalable and maintainable applications. Elixir's major advantage is that it is extremely scalable. Thanks to its extensible nature, web developers increase their productivity by naturally extending the language to specific domains. Elixir also comes with a sizeable set of tools that facilitate development everywhere. All these features rest on the bulletproof platform, Erlang, which powers some of the most reliable online services on the globe. We commonly use Elixir for the development of communication platforms (e.g. telecom, chat, video services).

We will discuss your product vision with you, and together we will decide if we're a good fit for your project. If we are, we'll send you a project estimate for your review. We'll pick the right development team for you and start your project using Scrum, an Agile development process. We offer you Agile Ruby on Rails development teams who deliver web solutions to suit your needs. Build an MVP, create APIs, scale your app or design interfaces, all under one roof. Build your MVP (Minimum Viable Product) from scratch with the help of our startup Ruby on Rails development team. Get to market quickly with the best results using Agile methodologies.
APIs allow your mobile application to connect to the backend, where all your data lies and your business logic is processed. They are the glue that connects all mobile devices and platforms. We (or, if requested, you) will test your idea. At this point, it's important to scale the app by adding new features, improving Ruby on Rails development speed and preparing for an increase in users. See progress day by day with continuous delivery. With over 300 successful project launches, we've got the experience to deliver the results you want. We have formalised best coding practices to ensure the scalability of your project. The code standards that all our developers adhere to are designed to allow your project to be scaled easily in the future: you will not have to make big changes in the code base to make big changes in the application. Our team is passionate about creating high-quality products that will turn your idea into a scalable business.
[SOLVED] System not finding libjava.so

When I try to run java I get "Error: could not find libjava.so" followed by "Error: could not find Java 2 Runtime Environment". I had already added /usr/lib/java/lib/i386 to /etc/ld.so.conf, but that did not help, and I have not figured out why yet. Running which -a java shows two executables: /usr/lib/java/bin/java and /usr/lib/java/jre/bin/java. Note that libjava.so does exist; it is located at /usr/lib/jvm/java-6-openjdk/jre/lib/i386/libjava.so. I am using a 64-bit OS, so I wonder whether a 32-bit JRE on a 64-bit system could be the cause. Is the default java executable the JRE's java, or something else?

One suggested fix was to set PATH and JAVA_HOME explicitly:

    PATH="$PATH:/usr/lib/java/bin:/usr/lib/java/jre/bin"
    JAVA_HOME="/usr/lib/java"
    export PATH JAVA_HOME

That didn't work for me. Shucks. Another suggestion was to run ". /etc/profile" in a shell right before launching java, so that the system-wide Java environment variables are loaded.
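The fixes discussed in the thread can be sanity-checked with a small script. The helper below is hypothetical (not part of any JDK tooling): it walks a JAVA_HOME-style directory tree looking for libjava.so and prints the environment lines suggested above. The function names are made up for illustration, and the jre/lib/i386 layout varies between JDKs, which is why it searches rather than assuming a fixed path.

```python
import os

def find_libjava(java_home):
    """Return the directory containing libjava.so under java_home, or None."""
    for root, _dirs, files in os.walk(java_home):
        if "libjava.so" in files:
            return root
    return None

def suggest_env(java_home):
    """Emit the PATH/JAVA_HOME fixes from the thread for a located libjava.so."""
    libdir = find_libjava(java_home)
    if libdir is None:
        return None
    return [
        'export JAVA_HOME="%s"' % java_home,
        'export PATH="$PATH:%s/bin:%s/jre/bin"' % (java_home, java_home),
        # the alternative fix from the thread: register the lib dir with the linker
        '# or add %s to /etc/ld.so.conf and run ldconfig' % libdir,
    ]
```

Running suggest_env("/usr/lib/jvm/java-6-openjdk") on the poster's machine would locate the jre/lib/i386 directory and print the corresponding export lines.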
Network traffic to server keeps dropping out

I have a production server that is responsible for controlling a factory. The server runs a number of control applications and a SQL server. The problem I have is that one of the applications that communicates with a PLC is reporting communication problems at seemingly random intervals. Using resource monitor, I have noticed the network activity drops sharply whenever this problem occurs. My VNC connection is not interrupted and the server responds to pings from other computers during the event/blip; however, other computers on the network which run applications connected to the SQL server freeze until the network traffic recovers. Screenshot of the resource monitor network graphs at the time of a blip: the first arrow is when we start experiencing communication problems and the second arrow is when things return to normal. I have analysed the SQL server at these times and there are no resource waits, and the number of batches processed per second is also low. I also did a trace on the SQL server at the time of a blip but this did not reveal anything significant. At the moment just before the network activity drops there are no other indications this is going to happen. The CPU is low and memory usage remains at about 70%. Could this be being caused by external factors affecting the network or maybe something wrong with the network card? Edit (additional information): This is a performance monitor for packets sent and received at the time of the blip: Could this be being caused by external factors affecting the network or maybe something wrong with the network card? - Yes it could. Do you have other systems connected to the same switch? If so, run resource monitor on one of those hosts and see if there's a corresponding dip. If there is then I would start taking a look at the switch and the network traffic. I ran the resource monitor on a different PC on the network.
I didn't notice any massive change when the blip occurs, but there isn't that much traffic, so I'm not sure if that's really telling me anything. This server is responsible for a lot of the traffic on the network. If the server has a problem and stops sending/receiving packets for whatever reason, then the general network traffic as observed from another computer would also dip, so I'm not sure if focusing on the network switch is a bit of a red herring. Try installing Wireshark (www.wireshark.org); you'll see what happens to each packet. I had a similar issue with my Exchange 2010 Server, and after analysing the packets I discovered that the issue was with IP fragmentation, which I resolved by reducing the MTU of the server's NIC. So, that might be a reason for the packet drops. Check this link: http://www.networkworld.com/community/blog/mtu-size-issues I ran Wireshark on a different PC on the network. There appears to be a dip in traffic that corresponds to the blip, but I can't be sure. I have never used this tool before, so I'm not sure what to filter out or look for. Also, the adapter MTU is currently set to 1500, which I think is probably correct. You can pause the capturing and then check each packet. The coloured packets (black and red, mainly) show the faulty packets. You can expand the info on each packet to get more details on the issue. What I did was to google each error I observed. I took a trace using Wireshark but couldn't find anything obvious that was causing the problem. I have an opportunity to install the latest NIC driver soon. I am also going to try disabling Large Send Offload and TCP Checksum Offload to see if that makes any difference.
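One way to pin down exactly when a blip starts and ends is to log throughput samples yourself and scan for the dips programmatically. The sketch below is a made-up helper, not tied to Resource Monitor or Wireshark: it takes (timestamp, bytes-per-second) samples with a threshold you choose, and reports the intervals where traffic stayed below it.

```python
def find_dropouts(samples, threshold):
    """Return (start, end) timestamp pairs where throughput stays below threshold.

    samples: list of (timestamp_seconds, bytes_per_second) tuples, in time order.
    """
    dropouts = []
    start = None
    for ts, bps in samples:
        if bps < threshold and start is None:
            start = ts  # throughput just fell below the floor
        elif bps >= threshold and start is not None:
            dropouts.append((start, ts))  # traffic recovered
            start = None
    if start is not None:
        # the capture ended while still inside a dropout
        dropouts.append((start, samples[-1][0]))
    return dropouts
```

Feeding it one-second samples logged on both the server and another PC would show whether the dips line up, which is the comparison suggested above for deciding between the switch and the server's NIC.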
Just one difference, however, is that the Groovy switch statement can handle any kind of switch value, and several kinds of matching can be performed.

Most programming project difficulties are frustrating and hard because of the time involved and the various glitches that can come up in the course of building the assignment. Many students struggle with these kinds of computer science projects, and you are not alone in this. Whether your trouble is finishing the projects on time or simply getting the work done correctly, you can find support here at Assignment Expert, where we will provide you with qualified experts to help with your programming questions. All these projects are rather silly, but the point is that they were interesting to me at the time. Getting confused while learning all the relevant concepts for your Java assignment? We can help you finish your programming assignment in Java with qualified help. Whether it's for a client-server application or a GUI, our team can save the day with their helpful services. This training is filled with real-life analytical problems which you will learn to solve. Some we will cover together; some you will have as homework exercises. In this guide all commands are given in code boxes, where the R code is printed in black, the comment text in blue, and the output generated by R in green. All comments/explanations start with the standard comment sign '#' to prevent them from being interpreted by R as commands.

In type-checked mode, methods are resolved at compile time. Resolution works by name and arguments. The return type is irrelevant to method selection. The types of the arguments are matched against the types of the parameters according to those rules.

I took help with my Marketing Plan assignment, and the tutor produced a perfectly written marketing plan ten days before my submission date. I got it reviewed by my professor and there were only small changes. Great work, guys. Money-back guarantee – things happen. Maybe your professor has canceled the task, or you have already completed it yourself. We will send your money back if you are not happy with the paper we have done for you. You will always be provided with additional reviews, advice and materials, which can save you many hours; also, you can contact your helper directly with our live chat option. This ensures that you can pass on your suggestions and make sure that your helper has a clear idea of what you need. Project Profanity Editor: imagine it's late at night and you get an e-mail from your boss requesting your help with something. I guarantee that once you use my services you will not be able to stop yourself from recommending them to your friends and others. I am not saying this out of imagination; I am saying it after analyzing my three years of experience and more than 99.4% satisfied clients from all over the world.
Today’s tip talks about how to use offset display for multi-lift projects or sub-level caving projects. When working with a multi-lift project, sometimes there is an overlap between the two lifts. This makes it difficult to graphically view the two lifts together on a plan view section. An offset can be placed on the layout to shift it so that the lifts are no longer displayed with an overlap. The images below show the plan view of a multi-lift layout without and with the offset display.

Assigning Drawpoints to Each Lift

The Sector, Production Block, or Group field in the drawpoint workspace can be used to identify each lift. In this example the Sector field will be used to map drawpoints from Lift1 and Lift2. This step only needs to be done if the drawpoints have not already been assigned to each lift. There are many methods to assign drawpoints to sectors. The example below will show how to do it through Excel. The Sector field can be set by exporting the drawpoints to Excel. Go to PCBC > Drawpoints > Export Drawpoint Data to Excel. Below is a view of the drawpoint data once it is exported to Excel. Drawpoints on a different lift can be identified by their elevation, and the Sector field can be set according to the elevation. The drawpoint name could also be used to identify each lift, depending on the naming convention that is used. Once the Sector field has been updated, you can import the drawpoints back into PCBC. Go to PCBC > Drawpoints > Import Drawpoints from Excel. Select the Excel file and sheet with the drawpoint data.

Define Offset Values

The offset for each lift is defined through an offset bucket. The bucket assigns an offset value to each drawpoint. The offset value for each lift is set up in Excel, where offset values are defined for each sector. In this example, LIFT2 will be offset 350m to the East of its current location. The Excel sheet is laid out as follows:

Column A – Name of the Sector, Production Block, or Group; row 1 is the name of the grouping field.
Column B – Offset value in the X direction; row 1 is the name of the bucket.
Column C – Offset value in the Y direction; row 1 is the name of the bucket.

Import these values as a bucket. Go to PCBC > Buckets > Transfer Bucket Data between Workspace and Excel. Select the Excel file and worksheet with the offset data. Select the option “Get data from Excel”. Select the workspace to import the bucket data into. Move Sector, OFFSETX, and OFFSETY over to the right-hand side. Down at the bottom, check on the option “Update draw points with value in field”. Select the Sector field and the Sector column in Excel. This option will link the drawpoints based on the Sector field. In this example the names of the buckets will be OFFSETX and OFFSETY. Below is a view of the bucket data in the Data Editor.

Displaying the Offset

To display the drawpoint layout with an offset, an OFFSET advanced profile needs to be created. The value in the advanced profile will be the name of the bucket. Once the OFFSET advanced profile is created, it needs to be applied to the caving area. Right-click on the caving area and select Block Cave Area Properties. Update the Advanced profile to the OFFSET advanced profile by selecting it in the drop-down list. The offset display can be easily toggled on and off: go to the Display tab in the Project View Window and check the box “Use offsets when displaying data”. The images below show the display of a three-lift project without and with the offset display.

Without offset display
With offset display
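The elevation-based sector assignment described above can also be scripted instead of edited by hand in Excel. This is a rough sketch with made-up column names ("Drawpoint", "Z", "Sector") and a made-up cutoff elevation; substitute the headers and elevations from your own drawpoint export.

```python
LIFT2_FLOOR_Z = 350.0  # assumed elevation separating the two lifts (example value)

def assign_sectors(rows, floor_z=LIFT2_FLOOR_Z):
    """Tag each drawpoint row with LIFT1 or LIFT2 based on its elevation."""
    for row in rows:
        row["Sector"] = "LIFT2" if row["Z"] >= floor_z else "LIFT1"
    return rows

# Example rows, shaped like a drawpoint export might be after reading it in:
drawpoints = [
    {"Drawpoint": "DP001", "Z": 120.0},
    {"Drawpoint": "DP101", "Z": 470.0},
]
assign_sectors(drawpoints)
```

After writing the tagged rows back out, the sheet can be imported with PCBC > Drawpoints > Import Drawpoints from Excel as described above.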
I've been absent from all my fictional writing projects for about a week now. I've just been so busy testing distros, and then when I finally found one, I had to go through a learning curve just to set things up the way I like it. I just want to say that I'm really proud of how smol.pub, midnight.pub, and nightfall.city have grown. Like these aren't even my projects; I'm not the owner. But I'm just so damn proud to see new content on them everyday. Content is the key. Without content, any site will die off sooner or later. So I'm happy to see all these individuals contributing to these projects with their posts. I've seen a lot of sites which died because of lack of content and active users. I distinctly remember one which had a great design. It had a forum, groups, user profiles and even live chat. It was fucking awesome! What it didn't have was active users. I think it stayed up for around 3 years? Which is a really long time considering that there were only about three active users in total. Then one day I visited the site and it was no longer online. The admin spent the first few months developing and setting everything up, and then she went away. She rarely came online, and I think with such a small site, everyone got inspired to see each other online. And it crushed their enthusiasm when one by one people fell off and the admin wasn't around for months at a time. The site had spawned as a result of another which had been dissolved after about a decade. I remember three sites had launched development to take over the "homeless" users. One of them looked so cool, it even had an online radio. Boy! I was so pumped up for it! I was on the waiting list for all three. Two sites went live, one of them missed the deadline. Only one site absorbed the majority of the users, the other one sat empty for weeks, then months, then years. The one that missed the deadline officially never opened. 
I think the owners got scared of the competition and decided to abandon the project. In my opinion, it was worth a shot. I think if it had opened up logins, it could quite possibly have been the next big thing. Y'know what's so interesting? Of the two sites that went live, the one that was almost finished got very few active users. The majority of the users flocked to the site that was still very much in beta. Users could only make posts, like and comment, and nothing else. I found it so weird that people chose to continue using it while waiting for the development to finish up rather than use the site that was already complete. If it was my project? I would never have killed it off. I would've changed the name, got a new domain name, done some marketing on Reddit and 4chan, and got things up and running! Another thing that would've helped was to get some mods whose primary function would be bringing in more members. Like I've seen so many niche sites that spawned out of nothing and grew into commendable projects. About a month ago, I came across this myspace clone site? Spacehey! Yeah, took a few seconds to google that. It looks great! I haven't joined it. I've been a bit skeptical about its privacy policies and stuff. I asked one of the admins who hosts a couple of great sites if he'd be interested in hosting a clone of it. If he had, I'd be all over it! I'd be posting on it like I was in the 2000's or something. I think the most important takeaway from all this is that the users make a site. If you've got great users, posting quality content, you're on a roll. BUT, at the same time, if you've got a large number of users, who ain't shit, don't post nothing, and just idle around? Your site's dead. Like trust me buddy. Without quality content, it's just a matter of time before you kill the site. So people are important. Quality people are very important.
41 of 45 people found the following review helpful For the Technically Inclined, This review is from: Microsoft Windows 7 Professional [Old Version] (Software) This is the full version of Windows 7 Professional, so chances are you're looking at it because you are building a new computer and plan on putting this on it. If you're wanting to upgrade your old XP or Vista computer and start from scratch, it's not like the old days where you could only format by using a full version. Save yourself some money and purchase the upgrade version: it will still offer you the ability to do a "clean" install and jettison the old Windows baggage. I HIGHLY recommend you stop by Microsoft's website and look the different versions over to confirm you have the right one. If you're running a computer old enough that you're upgrading from Windows 98 or Windows 2000, I don't recommend it: your performance will drop and you'll see compatibility problems, some of which may be major. If this is the case, stop by Microsoft.com, grab their "Windows 7 Upgrade Advisor", and run it first. That said, Windows 7 Home Premium is probably the best bet for the average home user. Unlike XP Home, which made basic things like networking a pain, or Vista Home, which really seemed to only be missing some eye candy, 7 Home Premium truly is aimed at the everyday consumer. Professional is going to be more suited to a corporate environment, or if you are an individual, you will probably want Professional for a personal machine that you regularly use to interact with a workplace. Pro gives you: 1) complex networking made simpler (for example, connecting to AD domains and/or interacting with your workplace/corporate networks) 2) "XP Mode" - which runs a program within a virtual version of XP. You still have the ability to use "XP Compatibility Mode", which fools your programs into thinking they're running in XP, but the XP Mode is an honest-to-god XP shell that runs within Windows 7. 
Your hardware will need to support "Virtualization Technology" in order to take advantage of this. 3) Automated backup (which can be done using free tools such as Macrium Reflect if you'd rather save the money) Ultimate also adds: 4) Hardware-level encryption (and your hardware will need to support this) 5) Native multi-language support, which lets you switch from one language to another on the fly, making things easier than they were with the Language Bar in XP or Vista. If you're a typical home user, chances are you'll be perfectly happy with Windows 7 Home Premium. If you're an avid gamer who often has to rig that favorite game *just so* in order to get it to run, you might consider Windows 7 Pro to ease your headaches. Ditto if for some reason you have a lot of older "barely XP compatible" programs that you think might be completely unable to function in just "XP Compatibility Mode" (check user forums first). And of course, if you have a desktop or laptop that you often use to connect to work with, check with your support guru, and he'll tell you whether you need to go with the Pro. Probably one of the biggest advantages with Windows 7 over earlier versions of Windows is that it makes sorting out your networks easier: specifying whether a network is a Home, Work, or Public network means Windows will be more open about sharing across your machines at Home, easier to access your files at Work, but much more cautious about information when you're on a Public network (which means free Wi-Fi hotspots and the like). This will help you a lot if you're jumping from a home environment to a work environment on a regular basis. It also helps Windows decide how often to nag you about security: more when you're connected to a Public network, less if you're at Home or securely connected to Work. If you dealt with that in Vista, you'll be relieved at the fewer security nags in 7. Another huge improvement is "Windows XP Mode".
If you have a program that is picky, XP Mode will run it in a native XP virtual environment. This doesn't have to mean the user has to know how to manage Virtual Machines; it can be configured to be localized to the program (stop by CNET TV and look at the video "Windows 7 video: Windows XP Mode" for an excellent explanation and demo). If you're putting this on a new computer that you want to use to replace an older computer, you will be happy to know that you can migrate all your old stuff from one machine to another. Search Microsoft.com for "Windows Easy Transfer": this program will let you migrate your old user accounts and settings to the new machine very simply. If you want to take it a step further--for example, you have a lot of programs on the old machine you don't want to have to sit down and reinstall one by one on the new machine--you can migrate all of them for $19.95 using Laplink PC Mover: their website is currently offering a special version that is just designed to migrate one computer to Windows 7 one time. Read the documentation very carefully and keep a copy of it handy as I've no room to go into detail on that here. So what are some of the things Microsoft doesn't tell you in the description above? Windows 7 isn't just "fixed Vista": it's a full overhaul of Windows based on a ton of feedback collected directly from Beta and RC 1 users (of which I was one--I let 'em have an earful and I think they actually listened). Windows 7 does things drastically differently from XP in that, like Vista, it does a lot of the eye candy in a smoother way. XP and earlier used to send graphics work through your processor before it'd get to your video card...now, it bypasses the processor and goes straight to the video card, clearing up what was a pretty substantial bottleneck. This system was imperfect (to say the least!)
in Vista, but it's been improved here, particularly in the area of being compatible with older games. Windows 7 is also trying to slowly "trim the fat" we normally have to put up with by making itself more compatible with other devices. Where you typically have to install a new device by running the manufacturer's setup disc, installing a bunch of junk and tray icons, and so on, Microsoft is making native support more common. My sound card, for example, used to need about 5 or 6 "helper" programs that would drain my performance and occasionally annoy me. Now it's just using the drivers that came with the installation of 7. New Operating Systems are always a bumpy road: your journey might not be as easy as others'. However, compared to previous Windows releases, Windows 7 is a substantial improvement, and I'm pleased to say that I haven't been burned by 7 like I was with Vista (and Windows Me--agh, the horror, the horror). If you just want to get yourself onto the 7 platform and don't need a lot of customization, Windows 7 Home Premium is a great place to start. If you need more for your work environment (or you are building a workplace environment), then 7 Pro is the way to go.
Add CapabilityStatementIndexer
Add Indexer for CapabilityStatements
Tests for same
Update SearchParameterResolver to respect the index if present

Related to the SNAPSHOT being updated from a feature branch: it looks like that happened back when .travis.yml was updated to use bash scripts. Before the change, deployments were only for develop and master: https://github.com/DBCG/cql_engine/commit/1431958c2e031f6ff9df95bde17fa373f4cf9d57#diff-6ac3f79fc25d95cd1e3d51da53a4b21b939437392578a35ae8cd6d5366ca5485L25 After the change, deployments happen regardless of the branch in the bash script. Not sure if the behavior change was intentional, but wanted to point out that the behavior did used to be different.

It should not be publishing from feature branches. I'll fix that.

I reverted the SNAPSHOT to the develop branch and disabled the build on this branch until I have time to look at it. Busy at the Connectathon today. Give your builds a shot.

This is going to be rolled into a planning phase in the engine using a slightly different approach.
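The intended behavior — publishing SNAPSHOTs only from develop and master, never from pull requests or feature branches — boils down to a small guard. The sketch below expresses that condition in Python purely for illustration; the real check would live in the Travis bash scripts, and the argument semantics follow Travis CI's documented TRAVIS_BRANCH and TRAVIS_PULL_REQUEST environment variables.

```python
DEPLOY_BRANCHES = ("develop", "master")

def should_deploy(branch, pull_request):
    """branch: TRAVIS_BRANCH; pull_request: TRAVIS_PULL_REQUEST ('false' or a PR number)."""
    if pull_request != "false":
        return False  # never publish from pull-request builds
    return branch in DEPLOY_BRANCHES  # only publish from develop or master
```

With a guard like this in front of the mvn deploy step, a build from a feature branch would fall through without publishing a SNAPSHOT.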
import sys
import re
import subprocess

if len(sys.argv) < 6:
    print sys.argv[0] + " cluster_size total_clusters latency partial_ip servers..."
    print "ex.: update_servers.py 4 2 1 172.16.52 2 3 4 5 6 7 8 9"
    sys.exit(-1)

cluster_size = int(sys.argv[1])
total_clusters = int(sys.argv[2])
latency = int(sys.argv[3])
partial_ip = sys.argv[4]
servers = sys.argv[5:]
total_servers = len(servers)

initial_global_port = 11300
initial_local_port = 11500
port_step = 10 if total_servers < 10 else 5

for i in range(0, total_servers):
    servers[i] = int(servers[i])

if len(servers) != cluster_size * total_clusters:
    print "servers are not cluster size times total clusters"
    sys.exit(-1)

print "cluster_size: %d" % cluster_size
print "total_clusters: %d" % total_clusters
print "latency: %d" % latency
print "partial_ip: " + partial_ip
print "servers:"
print servers

# create global strings
global_strings = []  # a list[server]
print " global:"
for i in range(0, total_servers):
    ip = "%s.%d" % (partial_ip, servers[i])
    port = initial_global_port + (port_step * i)
    global_strings.append("%d %s %d" % (i, ip, port))
    print global_strings[i]

# create local strings
local_strings = []  # a matrix [cluster][server]
for cluster in range(0, total_clusters):
    print ' local: %d' % (cluster)
    local_strings.append([])
    for i in range(0, cluster_size):
        server = servers[cluster + (i * total_clusters)]
        ip = "%s.%d" % (partial_ip, server)
        port = initial_local_port + (100 * cluster) + (port_step * i)
        local_strings[cluster].append("%d %s %d" % (i, ip, port))
        print local_strings[cluster][i]

# update lines from files by replacing strings
def update_lines(lines, strings):
    strings_count = 0
    for i in range(0, len(lines)):
        match = re.match('[0-9]+ ', lines[i])
        if match:
            number = int(match.group())
            if number < 7000:
                if number < len(strings):
                    lines[i] = strings[number] + "\n"
                    strings_count = number
                else:
                    lines[i] = "#" + lines[i]
            else:
                lastline = lines[i]
                lines = lines[0:i]
                for string in range(strings_count + 1, len(strings)):
                    lines.append(strings[string] + '\n')
                lines.append(lastline)
                return lines
    return lines

# write global file
gfile = open('thesis/global/config/hosts.config', 'r')
lines = gfile.readlines()
gfile.close()
lines = update_lines(lines, global_strings)
gfile = open('thesis/global/config/hosts.config', 'w')
gfile.writelines(lines)
gfile.close()

# write local files
for i in range(0, total_clusters):
    lfile = open('thesis/local%d/config/hosts.config' % i, 'r')
    lines = lfile.readlines()
    lfile.close()
    lines = update_lines(lines, local_strings[i])
    lfile = open('thesis/local%d/config/hosts.config' % i, 'w')
    lfile.writelines(lines)
    lfile.close()

# latency
# updates latency file lines
def update_latency_lines(lines, string):
    for i in range(0, len(lines)):
        match = re.match('HOSTS=', lines[i])
        if match:
            lines[i] = 'HOSTS=' + string + '\n'
    return lines

# stop latencies
for i in range(0, total_servers):
    cluster = i % total_clusters
    host = 'nova-%d' % servers[i]
    command = '~/addlatency%d.sh stop' % cluster
    subprocess.Popen(['ssh', host, command], shell=False)

# write latency files
if latency == 1:
    # set latency strings
    print "Latency hosts: "
    latency_strings = []  # a list of strings
    for cluster in range(0, total_clusters):
        latency_strings.append('"')
        for server in range(0, total_servers):
            if server % total_clusters != cluster:
                latency_strings[cluster] += '%s.%d ' % (partial_ip, servers[server])
        latency_strings[cluster] = latency_strings[cluster][:-1] + '"'
        print "HOSTS for local%d: " % cluster + latency_strings[cluster]
    for i in range(0, total_clusters):
        lfile = open('addlatency%d.sh' % i, 'r')
        lines = lfile.readlines()
        lfile.close()
        lines = update_latency_lines(lines, latency_strings[i])
        lfile = open('addlatency%d.sh' % i, 'w')
        lfile.writelines(lines)
        lfile.close()
    # restart latencies
    for i in range(0, total_servers):
        cluster = i % total_clusters
        host = 'nova-%d' % servers[i]
        command = '~/addlatency%d.sh start' % cluster
        subprocess.Popen(['ssh', host, command], shell=False)

# BAG_HOSTS
print ''
export_servers = 'export BAG_HOSTS="'
for server in servers:
    export_servers += "%s.%d " % (partial_ip, server)
export_servers = export_servers[:-1]
export_servers += '"'
print export_servers
It helped me a lot! Fortunately, this little work is really helpful for my coming burden in the application design process!

The IE engine then fails this action because we don't have a policy entry for it in the custom zone for VC++ Wizards. I also agree that the performance of these script-driven dialogs is abysmal, some 5000 times slower to execute than normal dialogs on my laptop!

visual-studio-2008 internet-explorer script-debugging, asked May 6 '09 at 16:30 by Joshua Belden. I should have also clarified, I'm using Internet Explorer 8. Any ideas? https://blogs.msdn.microsoft.com/vcblog/2009/03/28/some-vs2005-and-vs2008-wizards-pop-up-script-error/

I got the quote from the same cassette I think - some kind of radio show.

As far as I could tell, it's exactly the same as MSE (not to confuse it with the Script Debugger which is offered for free!). Please give more information. Now the error is back when selecting a platform. It has something to do with some security setting in the wizard's custom internet zone.

In other words, if you are using VS 2008 and IE8, there is no longer a need to tweak IE options to debug your site. At the end of the page download, I did a refresh.

Reply ms_sucks says: 8th September 2007 at 6:30 pm: Thanks Bernie, this is by far the best debugging setup for IE that I've seen. Couldn't find anything about disabling script debugging. - Joshua Belden May 6 '09 at 16:58. Ditto: I've never seen a "Show all settings" flag. I've taken the liberty of putting it in bold. What happens now is hardly acceptable. 8 years ago Reply Vyacheslav Lanovets: Thanks for providing the workaround.
Most of the time, whenever I leave VWDE to use another program like IE and then come back to VWDE, it shows up with an all-white window. MS seems to have screwed up big time with all these different development tools (mind you, they have a new one every 2 years) and they have done an absolutely pathetic job. I typed '1027' instead of '1207' in the registry.

I'm customising an interactive map client (à la Google Maps) using Firefox with Firebug as my front-end development platform.

That's ironic. 8 years ago Reply Vijay: I have the same issue as Jack. You are generating a script error, so the script error dialog is opening.

Reply Joseph says: 29th March 2007 at 4:24 am: Right now I use my browser and Firebug to test my gadgets. What is your OS/default IE? Thanks. Thanks once again for all your useful information and prompt response. The instructions should still work, let me know if you have any issues with them.

Reply bernie says: 27th March 2008 at 9:28 am: There is a Safari debugger but I've never used it. The work-around is to get VSE to launch IE for you, so that it owns the process and doesn't have to explicitly connect to it. Neither existing nor new ones show any problems changing the caption from the properties tool window. Until I read your post I was stabbing in the dark. Good luck, Rich

Reply Francisco Brito says: 3rd May 2007 at 6:24 pm: I've used a newer version of the Script Editor that comes bundled with Office.
As developer evangelist for Microsoft Austria, he is doing workshops, trainings, and proof-of-concept projects together with independent software vendors in Austria based on .NET, Web Services, and Office 2003 technologies.

I don't really like the servicing policy of Visual Studio, meaning that VS RTM is in fact of "beta" quality, SP1 is like a stable release, but there is no polished SP2.

Reply Greg Worthey says: 28th February 2008 at 4:14 am: Rowan Atkinson was repeating something attributed to Charles Darwin: "A mathematician is a blind man in a dark room looking for a black cat which isn't there."

You can use VWD to find out what is happening on the line that causes the error, or look up the line number from the error message.

How to Fix Visual Studio 2005 and 2008 Wizards Script Error: Not sure if it can generate bugs like this but worth checking. Is this some mysterious plot to stop people using MFC? (Times are hard - I have to earn a living.) Any idea what magic setting I set to fix this?
You are starting from an assumption that it seems a lot of people are ready to make, but is fundamentally wrong. You assume others place a value on life similar to your own - they do not. You assume that rich = bored evil-doer, and I'm not even going to comment on that.

On the first assumption, it is important to understand that folks like Saddam Hussein and just about every other person in power in the Middle East have a slightly different take on things than you do. There are the "faithful" and there are infidels. Infidels are not "people", they are obstructions to a goal. "Using up" the faithful in pursuit of a holy goal is acceptable - that is what Jihad is, after all. These folks do not have the same goals that Western civilization does. You can argue that Western goals are "wrong" and that we should adopt their goals instead, but that is a different subject entirely. Western civilization cannot "make peace" with these folks because they aren't interested in the same things. We want to contain or eliminate a threat and go back to shopping and watching TV. They want something different, and shopping isn't part of it at all. Frankly, we don't understand completely what they do want - it is outside of our understanding for the most part.

So, assuming your average Iraqi, Palestinian or Iranian wants to live like your average American or someone in France is wrong. It is factually wrong - they don't - but even more so it is culturally wrong. It is typically Western to assume that because we want things, others do as well, and that they place a similar value on them. This is assuming "facts not in evidence" and building up from there. There is plenty of evidence to indicate they view killing infidel humans like we view killing rats. You put the traps out and empty them when they are full. You don't hold ceremonies for the dead rats.
Unless they can somehow get around the idea that we're people too - just like them - the idea of a peaceful co-existence is as foreign to them as it would be for us to negotiate with rats. Now you may not like the realization that one day it is likely to come down to us vs. them - and only one will walk away - but that is the point of Jihad. Does this mean that every Muslim is our enemy who must be destroyed? No. It does mean there are people that have no intention of negotiating with us on reasonable terms, and against these people we must win. At the same time, we must never lose sight that no matter how much they treat us like vermin to be exterminated, we must treat them like people. Does that mean we can't kill them? No, people get killed all the time. It does mean that we have to kill them as an enemy and not as we would exterminate vermin.

Open Class Warfare

As to the idea that somehow the rich should compensate the poor for bearing all the burdens in the world, well, that's silly. All a proposal like this would result in is a vast, uncontrollable riot. Don't you remember the 1960's? I guess not - you probably weren't born yet. If the "poor" have nothing to lose, they will destroy their infrastructure and everyone else's along with it. All this proposal does is show (with plenty of supporting evidence) that the "poor" are worth nothing as humans and only the "rich" count. Assigning a monetary value to human life is pretty low, even for crass Westerners. It would make us little better than the folks that want to destroy us.

Considering the proposal more seriously, what you are asking is pretty much to tax the rich out of existence - or at least the "semi-rich" - and force them to be "average". While that might seem to have a funny kind of justice to some folks, it isn't going to work. Rich people got that way in one of a few relatively simple ways - they fell into it, or they fought for it.
If they fell into it, then maybe you might take it away without much of a fight. If they fought for it, they are likely to start over again and just become rich again. Wouldn't that really wreck things? If you change the rules so they can't win here, they will pick up and go somewhere else where they can win - and that society will receive the benefit of their efforts. We have seen this happen over and over again in the last 500 years or so - people with motivation and aptitude get pushed out of one society and end up somewhere else - where they succeed. The question you might like to ask instead is why everyone can't succeed. Don't tell me it is capitalism, because the same patterns have existed for thousands of years, regardless of the underlying economic system. Don't tell me it is because they are a minority and therefore somehow disadvantaged, because even in the US "White" does not equal "Rich", nor does "Black" equal "Poor". So what is it? Figure that out and maybe you have something worth talking about.
Did President Franklin Roosevelt appear in Buck Rogers? Did Franklin Roosevelt appear in the Buck Rogers comic strip, as suggested in the quote below? If so, in what issue, or on which date?

From the In Roosevelt History blog:

FDR: SPACE RANGER. In March 1944, the publishers of the Buck Rogers in the 25th Century newspaper cartoon strip wrote to FDR asking permission to include a cartoon version of him in an upcoming strip. The proposed storyline had Buck Rogers exploring a new world and discovering a machine that could look back in time and compare good and evil. FDR, of course, was to be an example of humanity's good, and Hitler and Japan's Tojo were to be examples of evil. To sweeten the request, the publishers included a membership card making the President a member of the Buck Rogers Rocket Rangers! The White House gave permission for the proposed strip, but unfortunately we don't know if it ever appeared.

The original comics are being republished, which would make it easier to find this out. Unfortunately, volume 6 (http://www.amazon.com/Buck-Rogers-25th-Century-Newspaper/dp/1932563563/ref=sr_1_5?s=books&ie=UTF8&qid=1306994732&sr=1-5) was released yesterday, June 1st, and only goes to 1938. It's likely that there will be 3 more volumes before 1944 - perhaps 6-8 months away? Volume 6 is now scheduled for release in February 2012. So it looks like it will be a long time before 1944 is republished.

I wish I could answer this - I have the book "Collected Works of Buck Rogers in the 25th Century". The last comics in it are from 1943, and take place on Planet X, inhabited by the Monkey Men (who devolved from Japanese soldiers who escaped after WWII). The comic is deeply racist in depicting the monkey men, and I'd bet that this was leading to an FDR depiction. My book does not have the FDR appearance however. This question should be rephrased, "did he... and when", as there's no real indication it actually happened.
I'm sure this is not much help, but I remember seeing a Buck Rogers strip reprinted years ago which depicted President Roosevelt as the president in the 25th century, and though it did not call him by name, the image was obviously a representation of Roosevelt. He was wearing a jumping belt or jet pack, and the interaction was brief; it was not part of a story line. I remember it because it was an anachronism. The strip never explained why the 25th Century president looked like Roosevelt, and as far as I know, it was a one-off. I don't remember which book I saw it in; it was a random reprint. Sorry for the lack of details, but this image does exist somewhere.

There are no direct references to this comic on the internet, other than the blog post mentioned in the question. Having said that, the comic in question could be "Monkeymen of Planet X", because it features the Japanese, with whom FDR was meant to be contrasted. Also, this comic was released during the time frame mentioned in the question and blog post. The other mention of the Japanese probably occurred in a comic strip before the permission letter to FDR.

I don't see this as an especially helpful answer. The comic you're referencing is from 1943, whereas the letter to Roosevelt is from 1944; http://www.isfdb.org/cgi-bin/title.cgi?585565

The "Monkeymen of Planet X" story is published in The Collected Works of Buck Rogers in the 25th Century. FDR is not in that story. However, the next story in this collection is from 1946. Strips 134-151 are missing; perhaps FDR is in one of these. I should also note, it looks like the numbering restarted at some point in the late 30s or early 40s. Sorry, I misunderstood how the strips are numbered; just ignore that part.

Hey Mark Rogers, was Buck Rogers your grandpa? Why not ask one of your relatives and get back to us?
Who lives on the Shadow Plane?

Important Note
While I do understand that the DM is ostensibly God in his game, my GM does generally respect most things written in the book.

Explanation
I have a character who is connected with the Shadow Plane in an E6 D&D 3.5 Pathfinder hybrid game, and I was talking to my DM, who said that nothing lives in the Shadow Plane. However, I am sure that I've read about some creatures that are native to the Shadow Plane. In addition, I think there's a template to turn creatures into Shadow creatures.

Question
What published examples can I show him within our scope that prove the Shadow Plane has significantly inhabited regions? I'd appreciate answers including page numbers for any referenced materials.

Scope
D&D 3.5 (No 3rd Party) + Pathfinder + E6. I would like to know of anything that is outside the scope of E6; however, please note it as such, because it will need extra special DM approval to use, as well as a good explanation as to why I should be allowed to use it.

@Dakeyras: Thank you for the edit; it made it much more readable and fixed a few typos.

The more I think about it, on some level, this question doesn't really fit on this site, I think. It's basically asking for a list of Shadow Plane denizens, which is explicitly not on topic. You need to figure out if there is a specific answer that you're looking for. @KRyan agreed. The second question appears on topic but the first does not, according to the meta referenced. Yeah, sorry man, but we try not to allow questions that are "list every single X where X is a big ass list." If you can refactor this into a question with a more definable best answer we can reopen it.

There is no one "Shadow Plane"/"Plane of Shadow"/"Shadowfell" because it is only an archetype that different games use. There is no one canonical "shadow plane". It is the GM's discretion as to what to veto and what to include. Might seem harsh, but a GM has got to do what a GM has got to do.
If it works for your game, good. The civilization most players are familiar with is the Shadar-kai. They first appeared in Fiend Folio, for D&D 3.0, and later in the Monster Manual for 4th Edition. There is also a race of language-worshiping humans called Illumians, which were detailed in Races of Destiny. Forgotten Realms also has a race of humans called Shades who live in the Shadow Plane and who are much like the Shadar-kai.

Many campaign settings' Shadow Planes have an ecology. From the Forgotten Realms Wiki: The best-known inhabitant of the Plane of Shadow is the shadow. Others are generally evil and dangerous, including the dusk beast, the ecalypse, the nightshade, the shadow mastiff and the umbral banyan. From a Greyhawk Wiki: Inhabitants of the Plane of Shadow include the khayal genies, dark ones, darkweavers, shadar-kai, shadelings, gloamings, shades, shadows, slow shadows, shadowswyfts, greeloxes, cloakers, illumians, nightshades, shadow dragons, and shadow demons. The Plane of Shadow from Eberron contains life too. The Forgotten Realms Wiki may also be useful to look at for the Shadar-kai.

Lots of things live in the Shadow Plane. For example, the Dark template from Tome of Magic can be applied to any creature; it's kind of the Shadow Plane equivalent to the Celestial and Fiendish templates. Unlike those templates, however, Dark is quite excellent: among other things, it gives Hide in Plain Sight, as an LA +1 template. (Do note that RAW, not all forms of HiPS are the same, and Dark's version only eliminates the need for concealment/cover, not the need to avoid direct observation - this makes it more like the Ranger's Camouflage than the Ranger's version of Hide in Plain Sight. Many DMs treat all forms of HiPS as covering both requirements of Hide, though.) There are similar templates for Shadow Plane versions of creatures, but those aren't as good (much more LA).
In general, Tome of Magic has a lot of information on things from the Shadow Plane, because an entire third of the book is devoted to Shadow Magic. Note: I do not recommend the Shadowcaster; it is a rather underwhelming class.
I am looking for a method to find overlapping habitats of different species. For example, I would like to find out where the Bobcat and Canadian Lynx have a lot of iNaturalist sightings in similar areas. I can search and take note of areas I know to be overlapping habitats for one species and then check the other, but I would love to be able to search all of them. Does anybody have any ideas how to go about this to produce a map with both (or more than two) species on it? I have searched GitHub but haven't found anything there. Thanks in advance for the help.

You could export the data, import it to ArcGIS or other GIS software, create buffers around the points, find where they intersect, and overlay a habitat raster. Alternatively you could clip the points to specific habitat types to see which species are distributed in specific habitats. I would love to hear if there is a quicker and easier solution though.

Thanks for the ideas Michael. I too would love to hear a quicker and easier solution, because what you suggest sounds like a lot of learning. While I understand your individual steps, I have no idea how to implement them. However, learning more about how to use GIS is on my todo list.

Now the question is how does one do that? I can obviously just find the taxon IDs and then adjust your URL. I imagine it is relatively easy but I seem to be missing something.

That's how I do it. If you had a bunch, you could script something up. Or there are ways to call up the map tiles in GIS programs, but that gets more complicated.

Well thanks, I will have to do it that way then. Glad you answered, because I would never have found the URL otherwise.

Welcome to the forum @chasingwildlife. And thanks for asking such a great question. I'm having a lot of fun working with the solution with other Felids and such. It is for the other cats that I am more interested in the answer. I want to find a spot in South America that will allow me to see as many cats as possible in a small area.
The same in Borneo. Long-term travel plans, but not a bad way to find a good area to maximise my chances of photographing the wild cats.

Adrian, sounds like a fun project. Not sure if you know your way around searching for the taxa, but here is South America Felids: Borneo Felids: https://www.inaturalist.org/taxa/map?taxa=922736,74753,41947,42033#6/1.582/113.709 Borneo is a great place to go to if you have not been.

Thanks for that. You did all the work for me. I can get the taxa but it is a bit of a laborious process. Well, I still have to do Africa. But I pretty much know those ranges off the top of my head. Also, it is only the golden cat and the black-footed cat that are difficult to find. I haven't yet been to Borneo. Looking forward to the end of the pandemic and the ability to plan travel again.

Yep, they are hard to see even in places where they are common on camera traps. Very hard it seems. All and any hints and tips are welcome.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.
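For anyone wanting to experiment before learning a full GIS workflow, the buffer-and-intersect idea suggested earlier in the thread can be roughly approximated in plain Python by binning exported observation coordinates into a coarse grid and keeping the cells where both species occur. This is only a sketch: the coordinates below are made up, and real ones would come from an iNaturalist observation export.

```python
from collections import defaultdict

def overlap_cells(sightings_a, sightings_b, cell_deg=0.5):
    """Bin (lat, lon) sightings into a coarse grid and return the cells
    where both species were observed, with each species' count per cell."""
    def bin_points(points):
        binned = defaultdict(int)
        for lat, lon in points:
            binned[(int(lat // cell_deg), int(lon // cell_deg))] += 1
        return binned

    a, b = bin_points(sightings_a), bin_points(sightings_b)
    # Cells present in both dictionaries are the overlap areas.
    return {cell: (a[cell], b[cell]) for cell in a.keys() & b.keys()}

# Hypothetical sightings (latitude, longitude):
bobcat = [(45.1, -93.2), (45.3, -93.4), (48.0, -110.0)]
lynx = [(45.2, -93.3), (60.1, -120.5)]
print(overlap_cells(bobcat, lynx))  # one shared ~0.5-degree cell
```

A smaller `cell_deg` tightens the definition of "similar areas"; a real GIS buffer-and-intersect, as suggested above, handles projections and true distances far more accurately.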
One function encrypts the text, and the other function decrypts it. We can do a short brute-force attack, trying each rotation of the alphabet until we see a plaintext message that makes sense. To encipher the letter P, locate it on the outer wheel and then write down the corresponding letter from the inner wheel, which in this case is B.

The image below will help you understand the Symmetric Cipher Model. Here, you can see that a plaintext is ready to be sent to the receiver. It first goes to the encryption algorithm, where a secret key also takes part.

Kahn describes instances of lovers engaging in secret communications enciphered using the Caesar cipher in The Times. Frequency analysis consists of counting how many times each letter appears. The earliest surviving records date to the 9th-century works of Al-Kindi in the Arab world, with the discovery of frequency analysis. If the keyword is as long as the message, chosen randomly, never becomes known to anyone else, and is never reused, this is the one-time pad cipher, proven unbreakable.

As we can see, this is very easy to implement, which also shows how easy it really is to break. If you are still having trouble, try the cryptanalysis section of the substitution cipher page. Cryptanalysis is the art of breaking codes and ciphers. The program will handle only English letters, and each input text will not be longer than one sentence. To encipher your own messages in Python, you can use the pycipher module. It uses two approaches to do this. The same can be accomplished by placing alphabets on two pieces of paper and sliding them back and forth to create a displacement.

On the receiver's side, the ciphertext goes through a decryption algorithm which also needs the same shared key that is there on the sender side. The first function takes one string and modifies it. It has a plaintext that is to be encrypted into ciphertext via some encryption algorithm, and sent via a secure channel to the receiver.
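The letter-counting step described above is simple to sketch; here is a minimal Python version (the sample ciphertext is a hypothetical shift-3 example, not taken from the article):

```python
from collections import Counter

def letter_frequencies(text):
    """Count how often each letter occurs, ignoring case and non-letters."""
    return Counter(ch.upper() for ch in text if ch.isalpha())

# "hello world" shifted by 3 becomes "khoor zruog"; the most frequent
# ciphertext letter O corresponds to the most frequent plaintext letter L.
print(letter_frequencies("khoor zruog").most_common(2))  # [('O', 3), ('R', 2)]
```

Because a Caesar shift only slides the frequency profile along the alphabet, comparing these counts against typical English letter frequencies usually reveals the shift directly.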
Next, it does the following. Consider signed versus unsigned: the variable rotatorN is declared as an int, which is a signed quantity, and the scanf function will allow a user to enter a negative number. To install it, use pip install pycipher. Given that x is the current letter's index in our alphabet and n is the rotation, the enciphered index is (x + n) mod 26. The function gets is used to read the input string from the user.

Again, it is very easy to break the encrypted text generated by this example. Sometimes it is enough to use one additional w. For a method that works well on computers, we need a way of figuring out which of the 25 possible decryptions looks the most like English text. The computer program that demonstrates the use of a Caesar substitution cipher displays alphabets on two lines that can be moved back and forth, rather than a rotating circle. As we can see, this process is very primitive in nature and way less secure than a typical substitution cipher.

History of cryptography: the Caesar cipher is named for Julius Caesar, who used an alphabet with a left shift of three. Instead of randomizing our keys and reassigning them as values, the Caesar cipher simply rotates the alphabet to the right. There are usually similar functions that will work with two-byte letters. This may be a holdover from an earlier time when Jewish people were not allowed to have mezuzot. Keywords shorter than the message, e.g. ... Be warned though, as this is only supported in Python 2. Also, at the end of the alphabet you wrap around and replace. We are keeping this logic very simple so that we can understand the code. If you input the encrypted text, you should get decrypted text as the output. Now, when these three things (plaintext, encryption algorithm and the key) complete their individual work, i.e. encryption, the ciphertext is produced. Application of the Caesar cipher does not change these letter frequencies, it merely shifts them along a bit; for a shift of 1, the most frequent ciphertext letter becomes f.
This was mostly due to the lack of educated people in the world. First we include stdio.

How to Write Caesar Cipher in C Program with Example Code, by Koscica Dusko, on August 7. The following is the output ciphertext for the above input in Caesar's cipher: WLV LV D WHVW PHVVDJH. The decryption is the reverse.

Let us learn how to write a program to encrypt and decrypt using the Caesar cipher in C programming. Here, we shall see two different ways of implementing the Caesar cipher algorithm in the C programming language.

I've taken code from here for a simple Caesar cipher, and I've modified it so that the user will define the cipher key. But the program crashes every time I try to run it. #include

Caesar Cipher in C and C++ [Encryption & Decryption]: Get a program for the Caesar cipher in C and C++ for encryption and decryption. What is the Caesar cipher? Below I have shared a program to implement the Caesar cipher in C and C++. Also read.

In cryptography, a Caesar cipher, also known as Caesar's cipher or the shift cipher, is named for Julius Caesar. One way to break it is to write out a snippet of the ciphertext in a table of all possible shifts.

Caesar Cipher History: The Roman ruler Julius Caesar (100 B.C. – 44 B.C.) used a very simple cipher for secret communication. To encipher the letter P, locate it on the outer wheel and then write down the corresponding letter from the inner wheel, which in this case is B. The computer program that demonstrates the use of a Caesar substitution cipher displays alphabets on two lines that can be moved back and forth.
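The rotation described in the article (letter index x plus shift n, modulo 26) and the brute-force attack of trying every rotation can be sketched in Python; the function names here are illustrative, not taken from the article's C code:

```python
def caesar(text, shift):
    """Shift each letter by `shift` places, wrapping at the end of the
    alphabet; pass a negative shift (or 26 - shift) to decrypt."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

def brute_force(ciphertext):
    """Try all 26 rotations; a human (or a scoring function) then picks
    the one that reads as plausible English."""
    return [caesar(ciphertext, -n) for n in range(26)]

ct = caesar("ATTACK AT DAWN", 3)
print(ct)                                    # DWWDFN DW GDZQ
print("ATTACK AT DAWN" in brute_force(ct))   # True
```

Since there are only 25 non-trivial shifts, the brute-force list is tiny; this is exactly why the article calls the cipher easy to break.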
This page is basically a "wish list", describing various add-on packages which could be useful for an accessible version of pmOS. Unless otherwise noted, all packages are free / open source software. Feel free to add your own favorites, following the existing format and order.

BRLTTY is a background process (daemon) which provides access to the Linux/Unix console (when in text mode) for a blind person using a refreshable braille display. It drives the braille display and provides complete screen review functionality. Some speech capability has also been incorporated. Ports for several hardware architectures are listed for the Alpine Linux edge branch.

edbrowse is a combination editor, browser, and mail client that is 100% text based. The interface is similar to /bin/ed, though there are many more features, such as editing multiple files simultaneously, and rendering HTML. This program was originally written for blind users, but many sighted users have taken advantage of the unique scripting capabilities of this program, which can be found nowhere else. A batch job, or cron job, can access web pages on the Internet, submit forms, and send email, with no human intervention whatsoever. edbrowse can also tap into databases through ODBC. It was primarily written by Karl Dahlke. No ports are currently listed for Alpine Linux. However, it is known to work on both 32 and 64 bit ARM and has been packaged for (at least) Arch, Debian, and Void Linux.

Fenrir is a modern, modular, flexible, and fast console screenreader. No ports are currently listed for Alpine Linux.

ircII (pronounced i-r-c-two or irk-two, and sometimes referred to as IRC client, second edition) is a free, open-source Unix IRC and ICB client written in C. Initially released in the late 1980s, it is the oldest IRC client still maintained.

Lynx is a customizable text-based web browser for use on cursor-addressable character cell terminals.
Orca is a free and open-source, flexible, extensible screen reader from the GNOME project for individuals who are blind or visually impaired. Using various combinations of speech synthesis and braille, Orca helps provide access to applications and toolkits that support the AT-SPI (e.g., the GNOME desktop, Mozilla Firefox/Thunderbird, OpenOffice.org/LibreOffice and GTK+, Qt and Java Swing/SWT applications). Ports for several hardware architectures are listed for the Alpine Linux v3.12 branch.

w3m is a ... text-based web browser and terminal pager. It has support for tables, frames, SSL connections, color, and inline images on suitable terminals. Generally, it renders pages in a form as true to their original layout as possible.

- Slint (based on Slackware)
- Stormux (based on Arch Linux ARM)
In computer programming, there are various data structures available to store and manipulate data. Two of the most commonly used data structures are arrays and linked lists. Both of these data structures are used to store collections of elements, but they differ in their implementation and performance characteristics. In this article, we will explore the difference between arrays and linked lists.

An array is a collection of elements of the same data type that are stored in contiguous memory locations. It is a linear data structure that can be accessed using an index, and the elements in an array can be accessed in any order. On the other hand, a linked list is a collection of elements called nodes, where each node contains a value and a pointer to the next node in the list. Linked lists can be singly linked, where each node points to the next node in the list, or doubly linked, where each node points to both the next and previous nodes in the list.

Arrays are implemented using a contiguous block of memory, which means that all the elements in an array are stored next to each other. This makes it easy to access any element in the array using its index, which is an integer value that represents the position of the element in the array. Linked lists, on the other hand, are implemented using dynamic memory allocation. Each node in a linked list is allocated separately and may not be located in contiguous memory locations. This makes it more difficult to access a specific node in the list, as we have to traverse the list from the beginning to reach the node we want to access.

Insertion and Deletion

Insertion and deletion of elements in an array are relatively straightforward when the array has empty slots. Elements can be inserted or deleted at any position in the array using the index, and the remaining elements are shifted accordingly. However, if the array is full, then the entire array needs to be re-allocated to increase its size, which can be time-consuming.
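The constant-time indexing and the shifting on insert and delete described above can be observed with a Python list standing in for an array (a sketch only; Python lists are dynamic arrays under the hood):

```python
arr = [10, 20, 30, 40]

# O(1) access: the element's address is computed directly from the index.
print(arr[2])  # 30

# Insertion at position 1: every later element shifts one slot to the right.
arr.insert(1, 15)
print(arr)  # [10, 15, 20, 30, 40]

# Deletion at position 2: the tail shifts back left to stay contiguous.
arr.pop(2)
print(arr)  # [10, 15, 30, 40]
```

Python lists also illustrate the re-allocation point: when the underlying block fills up, a larger block is allocated and all elements are copied over.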
Linked lists, on the other hand, are much better suited for insertion and deletion operations. To insert a new node in a linked list, we simply create a new node, update the pointers of the neighboring nodes, and link the new node in the list. To delete a node from a linked list, we update the pointers of the neighboring nodes to bypass the node we want to delete.

Accessing elements in an array is very fast, as we can access any element directly using its index. However, accessing elements in a linked list is slower than in an array, as we have to traverse the list from the beginning to reach the node we want to access. This makes arrays more suitable for applications where we need fast random access to elements, whereas linked lists are better suited for applications that require frequent insertion and deletion operations.

Arrays are statically allocated, which means that their size is fixed at the time of creation. This means that arrays cannot be resized once they are created, and their size cannot be changed dynamically during program execution. This can be a significant limitation in some applications, as it requires allocating memory for the maximum expected size of the array. Linked lists, on the other hand, are dynamically allocated, which means that their size can be increased or decreased dynamically during program execution. This makes linked lists more flexible and suitable for applications where the size of the data structure needs to change frequently.
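The pointer-rewiring insert and delete operations described above can be sketched with a minimal singly linked list (class and helper names are illustrative):

```python
class Node:
    """One element of a singly linked list: a value plus a next pointer."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # O(1): allocate the new node and rewire a single pointer; nothing shifts.
    node.next = Node(value, node.next)

def delete_after(node):
    # O(1): bypass the following node by rewiring one pointer.
    node.next = node.next.next

def to_list(head):
    # O(n) traversal from the head -- this is the cost of random access.
    values = []
    while head is not None:
        values.append(head.value)
        head = head.next
    return values

head = Node(1, Node(3))
insert_after(head, 2)
print(to_list(head))  # [1, 2, 3]
delete_after(head)
print(to_list(head))  # [1, 3]
```

Contrast this with the array case: no elements move in memory on insert or delete, but reaching the nth node always requires walking n pointers.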
OPCFW_CODE
Also some discussions on the topic, but nothing that is usable. Would appreciate anyone who can point me to a place to start; I tried looking into module development in the wiki but it is overwhelming, with a lot of outdated stuff. Am here to test out my knowledge and am willing to learn new things for this! @naveed1228 - I’ve just seen your thread here about lab modules. What is the functionality you are needing to cover with your lab module? Depending on what aspect of care you are supporting, the decision for what lab module to use or create will vary - or even whether you then decide to use a lab information system (LIS/LIMS) and work on interoperability between that and OpenMRS. Thanks @janflowers, we require the same functionality as Bahmni has with openElis. We want the doctor/physician to place the order during the patient's consultation. The lab attendant will receive the order at his/her terminal, and the following things will happen:
1: Lab attendant will collect the patient's sample.
2: Lab attendant will fill in the results of the lab order.
3: Lab attendant will take action on the basis of those results, e.g. complete, reject, abnormal.
4: The doctor will see the lab result outcome on the patient dashboard.
We evaluated openElis for this as well and talked with a Bahmni developer about it. We need this system in a short time, which is why I'm looking for a module that is easy to integrate with OpenMRS and provides the basic workflow. Do you have any information about an open-source LIMS which we can easily integrate with OpenMRS and which implements the laboratory workflow? I searched around and ended up working with openElis and Bahmni for now, although I would prefer a change.
It seems that is the only way it has been used, except for the Kenya lab module (which I did not manage to get working in the new version of OpenMRS). Yeah, we also have the same restriction. We have set up Reference Application 2; I think it will not work with the Reference Application. I just asked whether you have tried OpenMRS with openElis or are using the Bahmni distribution. I figured out the module by myself and am getting an understanding of how to create test orders etc. Currently I'm facing an issue related to SpecimenSite and Unit: how can I define these concepts at the backend? I defined a concept with the name SpecimenSite and added two to three concepts under it, but nothing populated in the dropdown. The basic workflow of the Common Lab Test module is: A Lab Test order is requested for a particular test type. This step involves collecting sample information if the ordered test requires a sample to process. (A Lab Test sample can be “Rejected” if it is contaminated or for any other reason. Otherwise it is “Accepted” to be processed, and after the lab result its status is changed to “Processed”.) Lab results are entered. It is a dynamic module which follows an (attribute_type and attribute) approach. This way a user can create Lab Test Types and their respective Attribute Types whenever a new Lab Test is introduced. The issue which you’ve encountered is because the module requires some setting up of global properties before it can be used. This is a required step so that metadata such as Specimen Type, Site etc. is available. I have responded in detail on the thread below.
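The order lifecycle described above (sample collected, then Accepted or Rejected, then Processed once results are in) can be sketched as a small state machine. The class and status names below are hypothetical illustrations of that workflow, not the Common Lab Test module's actual API:

```python
# Hypothetical sketch of the lab order lifecycle described in the thread.
# Status names ("Pending", "Accepted", "Rejected", "Processed") mirror the
# workflow text; they are not the module's real identifiers.

class LabTestOrder:
    def __init__(self, test_type, requires_sample=True):
        self.test_type = test_type
        self.requires_sample = requires_sample
        self.sample_status = "Pending" if requires_sample else None
        self.rejection_reason = None
        self.results = {}

    def accept_sample(self):
        self.sample_status = "Accepted"

    def reject_sample(self, reason):
        # e.g. the sample is contaminated
        self.sample_status = "Rejected"
        self.rejection_reason = reason

    def enter_results(self, results):
        if self.requires_sample and self.sample_status != "Accepted":
            raise ValueError("sample must be accepted before entering results")
        self.results = results
        self.sample_status = "Processed"

order = LabTestOrder("CBC")
order.accept_sample()
order.enter_results({"WBC": 6.1, "RBC": 4.8})
print(order.sample_status)  # Processed
```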
OPCFW_CODE
Cybersecurity Lab Developer I/CAEC NCP (UEC) California State University, San Bernardino Job No: 527364 Work type: Auxiliary Categories: Excluded, Administrative, At-Will, Full Time, On-site (work in-person at business location) About University Enterprises Corporation at CSUSB (This is not a state position) University Enterprises Corporation at CSUSB ('UEC') supports the university's educational mission by providing quality services that complement the instructional program. The University depends upon UEC to provide services that cannot be supported with state funds. We're responsible for business enterprises on campus including, but not limited to, dining, bookstore, convenience store, and vending services. We also serve as the grantee for federal, state, and local funding for research and sponsored projects. Full-time (40 hours a week), temporary 'non-exempt' benefited position (with the possibility of a one-year renewable appointment based on the availability of work, availability of funds, and satisfactory job performance). Salary: $4,333.33 - $5,416.67 per month Location: San Bernardino Monday to Friday from 8:00 AM PT to 5:00 PM PT. The incumbent must be able to participate in remote or on-campus work depending on the directives from the CSUSB campus. First Review Deadline This position will remain open until filled. Typical Duties include, but are not limited to, the following:
Create and maintain hands-on cybersecurity labs and lab environments while working within a small agile team.
Test and document hands-on cybersecurity labs and lab environments.
Assist in the design, development, and maintenance of software tooling to develop hands-on cybersecurity labs and lab environments.
Assist frontline support personnel in fielding support queries from various users regarding cybersecurity lab issues, including providing live troubleshooting and developing workarounds.
Learn additional tools, technologies, software, programming languages, and methodologies as needed to continue performing the duties of this position. Mentor technical student assistants and junior developers with varying knowledge and skillsets. Important Term Definitions: Experience: Used in non-trivial work, school, or personal projects. Should require little to no on-the-job training and mentorship. Familiar: Used in a minor, trivial, or trial manner for a work, school, or personal project. Might require some on-the-job training and mentorship. Knowledge: Has read about and theoretically understands the general value and use of a subject. Might require on-the-job training and mentorship. Minimum Qualifications:
High School Diploma or equivalent
Two (2) years of experience using Linux
One (1) year of experience in any programming/scripting language
Familiarity with Windows
Familiarity with core networking concepts (e.g., TCP/IP, IP Address, Subnet Mask, etc.)
Familiarity with common computer security concepts (e.g., defense-in-depth, attack surface, encryption, hashing, etc.)
Ability to communicate and collaborate effectively over online communications platforms (e.g., Slack, Microsoft Teams, Hipchat, etc.)
Preferred Qualifications:
Associate's Degree (or above) in a related field or with college coursework in related subjects
Experience with the Python programming language
Experience with the Bash and/or PowerShell scripting languages
Familiarity with systems monitoring tools (e.g., Nagios, Icinga, etc.)
Familiarity with common offensive security tool suites (e.g., Metasploit, MITRE Caldera, etc.)
Familiarity with the Git SCM
If available, please provide samples of code you have authored. These must be attributable to you, provided via public URLs such as GitHub accounts, GitLab accounts, open-source project contributions, and personal websites.
Benefits include:
Medical, Dental, Vision, Flex Cash option
CalPERS Retirement and CalPERS 457
Group Term Life / Accidental Death & Dismemberment (AD&D)
Holidays & Personal Holiday
Vacation and Sick pay accruals
Educational Assistance Benefit (based on availability of funding)
Workers' Compensation, Unemployment Insurance, State Disability Insurance
EQUAL OPPORTUNITY EMPLOYER University Enterprises Corporation at CSUSB is committed to a diverse workforce and affirmative action, and is an equal opportunity employer. UEC maintains and promotes a policy of non-discrimination and non-harassment on the basis of race, sex, gender, color, age, religion, national origin, ancestry, marital status, sexual orientation, physical or mental disability, pregnancy, medical condition, genetic characteristics, status as a disabled veteran, or disabled veteran of the Vietnam era. To view the UEC Affirmative Action Program, please contact UEC Human Resources at (909) 537-7589 Monday through Friday between the hours of 8:00am and 5:00pm. As an equal opportunity employer, University Enterprises Corporation at CSUSB (UEC) is committed to a diverse workforce. If you are a qualified individual with a disability or a disabled veteran, you have the right to request a reasonable accommodation if you are unable or limited in your ability to use or access UEC's career website as a result of your disability. You may request reasonable accommodations by calling UEC's Human Resources Manager at 909-537-7589. UEC is an EOE - Minority/Female/Disability/Veterans. This position will remain open until filled. This has been designated as a sensitive position. The selected candidate must successfully pass a thorough background investigation, to include a criminal history check, prior to appointment. California State University, San Bernardino offers a challenging and innovative academic environment.
The university seeks to provide a supportive and welcoming social and physical setting where students, faculty and staff feel they belong and can excel. The university provides students the opportunity to engage in the life of the campus, interact with others of diverse backgrounds and cultures, as well as participate in activities that encourage growth, curiosity and scholarly fulfillment. Through its branch campus in Palm Desert, the university mission extends to the Coachella Valley.
OPCFW_CODE
Jedha Academy: Career advancement. Want to become a data scientist? Our careers team will be happy to help you. At the end of the programme, you will receive a certificate in Data Science that you can add to your CV and LinkedIn profile. Added to this are the numerous Data Science projects that you get to complete during the bootcamp. Lifetime access to classes and our alumni community: our students have access to our classes, alumni community, and events, even after they have finished the programme. It's a great opportunity to network with other professionals. Discover the best tools to analyze your web data. Learn to use Google Analytics and A/B testing through case studies. We often hear about this language, but what is it for? You will learn how to manipulate large databases and understand how to connect them to each other. Learn how to present your data in the best possible way to convince your audience. Machine Learning & Python Today, we produce more data than we are capable of processing. Learn how Machine Learning can provide a solution to these problems and construct powerful prediction models using Python and Spyder. Once you have the fundamentals of programming, you will study the theoretical fundamentals of Data Science. The goal is for you to develop a sharp analytical mind. Big Data & Cloud Computing With the amount of data generated today, it is important to be able to deal with Big Data issues. You will get to use Spark, SQL and NoSQL databases with Amazon Web Services tools. During this week, we will hold a LinkedIn resume review session. You will also prepare for interviews so that you can get a job right after the Bootcamp. The bootcamp will end with the production of Data Science projects, which you will present in front of recruiters.
The program includes: - Week 1: Python Fundamentals - Week 2: Data visualization - Week 3: Supervised Machine Learning - Week 4: Boosting and time series - Week 5: Unsupervised Machine Learning - Week 6: Advanced Machine Learning - Week 7: API & Web Development - Week 8: Database management - Week 9: Big Data management with Spark - Week 10: Career coaching - Weeks 11 & 12: Production and presentation of Data Science projects For our alumni: - Lifetime access to our community - Lifetime access to course supporting materials - Free lifetime access to our workshops Who is this course for? Data is everywhere. Whether you are in Finance, Sales or Marketing, you will need to be able to use data management tools to meet your business' needs. Can I become a Data Scientist after this course? It's entirely possible. This course will give you the fundamentals of Data Science that you can apply directly in business. However, the road to Data Science is littered with obstacles and you will have to work hard to find a job in this sector. Are there pre-requisites for enrolling in this course? We work on the principle that our students know how to use Excel and have an understanding of maths to A-level standard or equivalent. There will, of course, be reminders, but we focus on knowledge that is directly applicable in business. Do I need to know how to code? Will there be a final project I will have to submit? During the course, we will work on a Data Science challenge that you will present at the end of the course and which you will be able to show to any recruiters. Our Next Classes Full-time: October 29th - January 31st, Monday to Friday from 9:30 am to 3:30 pm
OPCFW_CODE
Get some insights into the fascinating and exhausting world of integrating Windows™ scanners into the secureCodeBox. To date, Microsoft Windows is still the most popular operating system, especially in office and work-related settings. Unsurprisingly, the majority of malware is also created for Windows. While most of the scanners already implemented in the secureCodeBox can target and be run on any operating system, the need for Windows-specific security measures is evident. There exist several security scanners that target specific Windows-related security aspects, such as Mandiant or PingCastle. PingCastle scans a domain with an Active Directory (AD), reporting any risks that can result, for example, from disproportionately privileged accounts or weak password policies. It is the first scanner that we went for integrating into the secureCodeBox, and what a journey it was! Join us on our path to automated Windows security, including a lot of inception, dirty workarounds and a sour taste of Wine... Integrating PingCastle into the secureCodeBox - First Attempts So here was our starting point: we had already run some successful scans of PingCastle against our own AD. So it would be nice to automate the scans and get informed if critical issues arise. As this is the whole point of our secureCodeBox, we wanted to add PingCastle as a scanner and eventually provide the community (you) with the possibility to do the same. As all of our scanners run on Linux distributions to date, it would not be feasible to simply add a Windows Docker container to our Kubernetes cluster, as Linux and Windows Docker environments are not easily interchangeable. So the idea was simply to run PingCastle in a Linux container. Well, it didn't turn out to be that simple... As PingCastle is open source, our first attempt was to compile it ourselves with Mono or .NET for Linux. We tried, to no avail.
After some talks with professional .NET developers, we decided that this approach would exceed both our time and knowledge capabilities. So the next idea was to run it with Wine. If this had worked, we would have had a pretty stable solution that could probably be applied to a lot of Windows scanners. Unfortunately, PingCastle did start and execute in our Wine environment, but failed to execute any scans against our AD. After trying a lot of things, including adding our computers to the domain and using VPN connections, we had to give up. Probably PingCastle in the Wine environment does not have the required access to some DLLs needed for the scan, or PingCastle itself is just a little picky, as we will see later... However, maybe we will come back to Wine in the future for other Windows scanners. Starting the inception So we finally came up with a rather "brute-force" method: if PingCastle solely runs on Windows - why not put Windows into a Linux container? Virtual machines (VMs) are a well-known tool to achieve things like this. After solving some setup problems, we could confirm that it actually works to run a Windows VM in a Linux Docker container! (Running on our Ubuntu main OS, which provides the VirtualBox driver, the VM actually does not run in the container but rather on the host OS - and the inception took off!) After that we prepared the Windows 10 virtual machine image by adding it to the domain, linking it to our VPN and finally installing PingCastle. We could confirm that the scans inside the VM ran properly, but surprisingly a major issue with the VPN arose. Of course, one has to connect to the VPN automatically on start-up in order to run the scans from outside the machine. It turned out, however, that PingCastle is indeed very picky. It always refused to work while the machine was connected automatically to the VPN (e.g. using rasdial). It would, however, perfectly do its job when connected manually to the VPN!
We tried a lot here, and you can read all about our dirty workaround to finally make it work in our related extensive "Tutorial". With this tutorial you should be able to reproduce our attempt and set up a working container that is actually capable of being integrated into the secureCodeBox. We already provide you with all the other necessary files, especially the parser that automatically converts the PingCastle scan XML to our secureCodeBox findings format. Be aware, however, that the solution is not yet stable for production and that you could still face some major issues with it. For example, it is not yet clear to us how the container will behave when deployed over a long period of time. Maybe the VM will shut down unexpectedly, and we all know and love the BSoD when Windows refuses to start normally. This, of course, would also prevent any automatic scans from being executed. That is why we are thankful for any comments, experience reports or even suggestions on how to improve our chosen setup. In addition, if you have any questions or face any issues, please also let us know!
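To give a rough idea of what such a parser does, here is a simplified sketch of converting a PingCastle-style risk report into a list of findings. The XML element names ("HealthcheckRiskRule", "Points", "Rationale") and the findings fields are assumptions for illustration, not the exact PingCastle schema or the full secureCodeBox findings format:

```python
import xml.etree.ElementTree as ET

def parse_findings(xml_text):
    """Convert a PingCastle-style XML risk report into findings dicts.

    Assumed structure: each risk is a <HealthcheckRiskRule> with a
    <Points> score and a <Rationale> description. Real reports differ;
    this only illustrates the XML-to-findings mapping.
    """
    root = ET.fromstring(xml_text)
    findings = []
    for rule in root.iter("HealthcheckRiskRule"):
        points = int(rule.findtext("Points", default="0"))
        findings.append({
            "name": rule.findtext("Rationale", default="Unknown risk"),
            "category": "AD Risk",
            "severity": "HIGH" if points >= 10 else "MEDIUM",
        })
    return findings

report = """<HealthcheckData>
  <HealthcheckRiskRule>
    <Points>15</Points>
    <Rationale>Weak password policy detected</Rationale>
  </HealthcheckRiskRule>
</HealthcheckData>"""
print(parse_findings(report))
```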
OPCFW_CODE
import { log } from "@ledgerhq/logs";
import invariant from "invariant";
import { Subject, Observable } from "rxjs";
import { map, distinctUntilChanged } from "rxjs/operators";
import type { Core } from "./types";

const GC_DELAY = 1000;

let core: Core | null | undefined;
let corePromise: Promise<Core> | null | undefined;
let libcoreJobsCounter = 0;
let lastFlush: Promise<void> = Promise.resolve();
let flushTimeout: NodeJS.Timeout | null | number = null;

const libcoreJobsCounterSubject: Subject<number> = new Subject();

export const libcoreJobBusy: Observable<boolean> =
  libcoreJobsCounterSubject.pipe(
    map((v) => v > 0),
    distinctUntilChanged()
  );

type AfterGCJob<R> = {
  job: (arg0: Core) => Promise<R>;
  resolve: (arg0: R) => void;
};

const afterLibcoreFlushes: Array<AfterGCJob<any>> = [];

function flush(c: Core) {
  log("libcore/access", "flush");
  lastFlush = c
    .flush()
    .then(async () => {
      let item;
      while ((item = afterLibcoreFlushes.shift())) {
        item.resolve(await item.job(c));
      }
      log("libcore/access", "flush end");
    })
    .catch((e) => {
      log("libcore/access", "flush error " + String(e));
      console.error(e);
    });
}

export async function afterLibcoreGC<R>(
  job: (core: Core) => Promise<R>
): Promise<R> {
  return new Promise((resolve) => {
    if (!core) return;
    log("libcore/access", "new after gc job");
    afterLibcoreFlushes.push({
      job,
      resolve,
    });
    if (libcoreJobsCounter === 0) {
      log("libcore/access", "after gc job exec now");
      if (flushTimeout) {
        clearTimeout(flushTimeout as number);
      }
      flushTimeout = setTimeout(flush.bind(null, core), GC_DELAY);
    }
  });
}

export async function withLibcore<R>(
  job: (core: Core) => Promise<R>
): Promise<R> {
  libcoreJobsCounter++;
  libcoreJobsCounterSubject.next(libcoreJobsCounter);
  let c: Core | null | undefined;
  try {
    if (flushTimeout) {
      // there is a new job so we must not do the GC yet.
      clearTimeout(flushTimeout as number);
      flushTimeout = null;
    }
    c = await load();
    await lastFlush; // wait previous flush before starting anything
    const res = await job(c);
    return res;
  } finally {
    libcoreJobsCounter--;
    libcoreJobsCounterSubject.next(libcoreJobsCounter);
    if (c && libcoreJobsCounter === 0) {
      flushTimeout = setTimeout(flush.bind(null, c), GC_DELAY);
    }
  }
}

type Fn<A extends Array<any>, R> = (...args: A) => Promise<R>;

export const withLibcoreF =
  <A extends Array<any>, R>(job: (core: Core) => Fn<A, R>): Fn<A, R> =>
  (...args) =>
    withLibcore((c) => job(c)(...args));

let loadCoreImpl: (() => Promise<Core>) | null | undefined;

// reset the libcore data
export async function reset(): Promise<void> {
  log("libcore/access", "reset");
  if (!core) return;
  invariant(libcoreJobsCounter === 0, "some libcore jobs are still running");
  await core.getPoolInstance().freshResetAll();
  core = null;
  corePromise = null;
}

async function load(): Promise<Core> {
  if (core) {
    return core;
  }
  if (!corePromise) {
    if (!loadCoreImpl) {
      console.warn("loadCore implementation is missing");
      throw new Error("loadCoreImpl missing");
    }
    log("libcore/access", "load core impl");
    corePromise = loadCoreImpl();
  }
  core = await corePromise;
  return core;
}

export function setLoadCoreImplementation(loadCore: () => Promise<Core>) {
  loadCoreImpl = loadCore;
}
STACK_EDU
We're moving towards keeping just one messages file checked into version control that contains all messages for all front end and language combinations, and then splitting things up at build time. This should both help to keep languages in sync and reduce the amount of messages data each front end has to load. (I.e., there's a lot of GTK- and RISC OS-specific stuff in there at the moment.) There is now netsurf/resources/FatMessages, which is processed by netsurf/utils/split-messages.pl, which can be called like this:
./utils/split-messages.pl en gtk < resources/FatMessages
This will output to stdout all the messages that are relevant to the GTK front end in English. 'fr ro' is for French RISC OS, 'nl ami' for Dutch Amiga. Messages with a platform of 'all' get emitted for all platforms. And here's the problem. I managed to automate the appropriate tagging of platform-specific messages for some things (such as the GTK messages, or RISC OS interactive help), but a lot of stuff isn't namespaced. Can people please go through netsurf/resources/FatMessages and change the prefixes appropriately if they know a message is used only by a specific front end. For example, there's a lot of stuff there that looks like it's only used by the RISC OS front end for its menu construction, but I don't know if some other platform makes use of them. (I suppose we could always suck it and see.) I just noticed, one day after fixing the BeOS build, that r13573 broke it. Of course, since I use the top-level Makefile, it wasn't even built or installed, so it was missing the headers. I added it as r13574. Now I'll have to fix libdom itself; I guess I'll spend quite some time on it. I'll try to make a Haiku cross-compiler to make it easier for others to test builds, but it would have been nice to be told beforehand about that. For now it's complaining about "ANSI does not permit the keyword 'inline'", and of course not finding libxml2...
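The split step above can be sketched in a few lines. This assumes a `language.platform.Key:Text` line format for FatMessages — the real syntax handled by split-messages.pl may differ — and the keys and messages below are made-up examples:

```python
def split_messages(fat_messages, lang, platform):
    """Emit only the messages for one language/platform combination.

    Assumed line format: "language.platform.Key:Text" (hypothetical).
    Messages tagged with platform "all" apply to every front end.
    """
    out = []
    for line in fat_messages.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        prefix, _, text = line.partition(":")
        parts = prefix.split(".")
        if len(parts) != 3:
            continue  # not a tagged message line
        msg_lang, msg_plat, key = parts
        if msg_lang == lang and msg_plat in (platform, "all"):
            out.append(f"{key}:{text}")
    return "\n".join(out)

fat = """en.all.PageLoading:Loading
en.gtk.MenuFile:_File
en.ro.HelpIconbar:Click SELECT to open a new window.
fr.all.PageLoading:Chargement"""
print(split_messages(fat, "en", "gtk"))
```

The point is the filtering rule: a front end's build sees only its own platform's messages plus the 'all' ones, so the GTK build never carries the RISC OS interactive-help strings.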
I'm looking for ideas for my development team. Is it possible to use CSS in Objective-C? The purpose is to build template applications for iPhone where we can easily modify the code to have changes made to the colors, etc. On Tue, Mar 13, 2012 at 11:29:47AM -0000, Robin Edwards wrote: > Dear Steve, My name is not Steve. But Steve's is. :) > We are intending to run a Show again this year, and so need to get an idea of > how many people/groups wish to have a Stand. > Here are the vital details:- > The 2012 Midland's RISC OS Show is in the same place as last year, St John's > Hall, Kenilworth. > The date is Saturday 7th July. The show opens at 11.00 and closes at 4.00. > Exhibitors can access the hall from 9.00. Tables cost £10 each. > Full up-to-date details are on the MUG website at:- > So that we can get some idea of numbers, would you please reply fairly soon > Yes - I will come > No - I will not be coming > Maybe - I am not able to decide yet > If the answer is yes, would you also say how many tables you might like. > Thanks! We look forward to seeing you there. As usual, the show is perilously close to my birthday, and I don't yet know if I'll be able to make it. I have CCed the developer list in case anybody else is available. > Please do not repeat your point any further, either of you. This > discussion is a waste of bits. rob: with respect, that is your opinion. and, with respect: i am compelled from experience to point out that your opinion tells me that you simply do not have any experience of the scale or scope of the task that you are facing. with respect: you are failing in your duty of care and responsibility to the netsurf community if you do not fully comprehend the various options out there, and have "dismissed them" for undocumented reasons. there are damn good reasons why Common Object Models are deployed for the purpose described, and you have not even _begun_ to describe any technical reasons - of any kind!
- as to why you are dismissing the options being presented. should you wish to develop netsurf entirely "in secret" behind closed doors please feel free to do so, but should you choose to do so please actually state on the web site "external technical input from experienced software developers is not welcome in this project". i trust that this is not the kind of image that the netsurf project wishes to present to the outside world. > NetSurf will almost certainly never rely > on a heavyweight tricky-to-port fat library such as any of the Glib > family, or anything else like that. If we need something as you > suggest, it will probably be our own, and not just because of NIH > syndrome, but because we have special requirements. 1) where may a list of the special requirements be found, such that i may review them and thus focus the spending of my personal free time and personal funds on aiding and assisting the users of netsurf? 2) with respect: why did you not raise this earlier rather than letting us spend large amounts of time discussing matters which in your opinion are "a waste of time"? 3) to dismiss existing Common Object Models in general as "heavyweight" is pure foolishness, rob. and is disrespectful towards people who wish to aid and assist the netsurf community. 4) have you done a full technical evaluation of the amount of code that is likely to be generated by "rolling your own" Common Object Model or Code-Generator? 5) the discussion is not limited to gobject. as part of the discussion, a number of alternative COM-inspired technologies were found, many of which were designed with embedded systems in mind. have you evaluated those technologies? i actually *know* what you're facing, rob, to achieve the goal. it is simply - without fail - an absolute, undeniable, 100% cast-iron-guaranteed, inescapable fact that you are *GOING* to have to either use or write a Common Object Model system, or a Code-Generator.
* a code-generator *will* add hundreds of thousands of lines of code to the netsurf project. this is an inescapable, undeniable fact. * writing your own Common Object Model *will* take you something like an entire man-year to write and get right. the question is: is it _really_ worth it, and, once the task has been completed, are you *absolutely* sure that it will result in "fulfilling the special requirements" (*1), by virtue of you having done a full and comprehensive analysis of the execution speed, memory usage and binary object size? (*1: which you didn't actually list, or take the time to point anyone towards a document which describes them.) i have absolutely no doubt that should the netsurf team choose the route of "rolling their own", it will be one of the most superb free software COM systems in existence (i know the professional and technical experience of the netsurf developers is exceptionally high). should the netsurf team choose that route, i look forward to being able to take that excellent software-libre-licensed code so developed in order to utilise it in other projects, and i most certainly will evaluate it and advocate it right across the board in other forums which could benefit from the work done. i've just learned of the existence of netsurf, and am very excited to hear of it. i've just tried running it, and i'm ... surprised by its level of functionality. i say surprised because it's well below most people's radar, and at the same time is damn good! perplexing... anyway, the reason i'm writing is because, as the lead developer of the pyjamas and pyjamas-desktop projects, i need to be able to give pyjd users more (and easier to install) options. they're simply not c/c++ programmers: they're all python programmers. the windows users are well served by COM bindings to MSHTML (Trident) - ironically it's the free software developers that are suffering.
so in 2008 i did the python bindings for webkit (two versions, one of which was based on gobject with follow-up auto-generated python bindings using python-gobject's codegen). pyjamas-desktop has been using both xulrunner and MSHTML successfully... for a given definition of "successfully". anyway, all of the free software options are deeply unsatisfactory, hence the reason why i was so excited to hear about netsurf. some clarification about what i would like to achieve, and what's needed: * what is NOT needed is "script language=python". pyjamas-desktop does NOT revolve around embedding of python *into* the web browser dot * i need to take netsurf-gtk and turn it into libnetsurf-gtk, followed then by turning it into python-libnetsurf-gtk * added to that, it must then be possible to gain access to the DOM functions (from python. all of them). in other words, the core drawing engine is embedded into a full-screen single-use window (no "URL bar", no menus, no back button, nothing) and then python is given access to the drawing engine's DOM handle. an example write-up of how it all works, in the xulrunner case, is anyway, i'm curious as to how far along the netsurf project is to being hackable in order to use it for python-embedded purposes like this. rather than swamp this list with something that may be utterly boring to most people i've written it up here: much of the experiences described in that draft document are based on having worked with IDL compilers and Common Object Model technologies in samba, wine, webkit and firefox. the bottom line is that all of this applies if you have the (ultimate) goal of adding DOM access in such a way that other languages can play nice, too. thoughts greatly appreciated.
>> * microsoft silverlight. this one *does* allow interaction with the DOM,
>> and it has resulted in things like iron ruby and iron python gaining
>> access to the DOM, and enabling "script language=xyz".
>> unfortunately, it's IE only - i'd hate to rate anyone's chances of getting
>> this to work under wine with Mono.
> IE only? What's the Silverlight plugin in Firefox for then?
jeremy, hi, thank you for prompting me to do some research. yeah, you're right: there's a moonlight plugin for linux users on firefox.
OPCFW_CODE