Domestic violence (DV) is a serious public health issue, with 1 in 3 women and 1 in 4 men experiencing some form of partner-related violence every year. Existing research has shown a strong association between alcohol use and DV at the individual level. Accordingly, alcohol use could also be a predictor for DV at the neighborhood level, helping identify the neighborhoods where DV is more likely to happen. However, it is difficult and costly to collect data that can represent neighborhood-level alcohol use especially for a large geographic area. In this study, we propose to derive information about the alcohol outlet visits of the residents of different neighborhoods from anonymized mobile phone location data, and investigate whether the derived visits can help better predict DV at the neighborhood level. We use mobile phone data from the company SafeGraph, which is freely available to researchers and which contains information about how people visit various points-of-interest including alcohol outlets. In such data, a visit to an alcohol outlet is identified based on the GPS point location of the mobile phone and the building footprint (a polygon) of the alcohol outlet. We present our method for deriving neighborhood-level alcohol outlet visits, and experiment with four different statistical and machine learning models to investigate the role of the derived visits in enhancing DV prediction based on an empirical dataset about DV in Chicago. Our results reveal the effectiveness of the derived alcohol outlets visits in helping identify neighborhoods that are more likely to suffer from DV, and can inform policies related to DV intervention and alcohol outlet licensing. As many U.S. states implemented stay-at-home orders beginning in March 2020, anecdotes reported a surge in alcohol sales, raising concerns about increased alcohol use and associated ills. The surveillance report from the U.S. National Institute on Alcohol Abuse and Alcoholism provides data about the monthly alcohol sales in a subset of states, allowing an investigation of this potential increase in alcohol use. Meanwhile, anonymized human mobility data released by companies such as SafeGraph enables an examination of the visiting behavior of people to various alcohol outlets such as bars and liquor stores. Leveraging these novel datasets, this study examines changes to alcohol sales and alcohol outlet visits during COVID-19 and their geographic differences in a subset of U.S. states. We find major increases in the sales of spirits and wine since March 2020, while the sales of beer decreased. We also find moderate increases in people’s visits to liquor stores, while their visits to bars and pubs substantially decreased. Noticing a significant correlation between alcohol sales and outlet visits, we use machine learning models to examine how that relation changed in the early months of COVID-19 and find evidence in some states for likely panic buying of spirits and wine. Large geographic differences exist across the examined states, with both major increases and decreases in alcohol sales and alcohol outlet visits. While a lot of challenges and uncertainty continued in 2021, everyone in the world has been working hard and playing their roles to keep things functioning and to help us get back to a life without virus. Let’s hope for a better new year with all promises! We wish you a Merry Christmas and a happy and healthy 2022! 
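Returning to the domestic-violence study summarized at the start of this entry: it notes that a visit to an alcohol outlet is identified when a mobile phone's GPS point falls within the outlet's building footprint polygon. Below is a minimal sketch of that point-in-polygon test, assuming the Shapely library and using made-up coordinates; the study's actual SafeGraph-based pipeline is more involved.

# Minimal point-in-polygon sketch (illustrative only; coordinates are made up).
from shapely.geometry import Point, Polygon

# Hypothetical building footprint of an alcohol outlet (lon, lat vertices).
outlet_footprint = Polygon([
    (-87.6298, 41.8781),
    (-87.6292, 41.8781),
    (-87.6292, 41.8785),
    (-87.6298, 41.8785),
])

# Hypothetical GPS point from an anonymized mobile phone record.
gps_point = Point(-87.6295, 41.8783)

# A "visit" is counted when the point lies inside the footprint polygon.
if outlet_footprint.contains(gps_point):
    print("GPS point falls inside the outlet footprint: count one visit")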
A common need for artificial intelligence models in the broader geoscience is to represent and encode various types of spatial data, such as points (e.g., points of interest), polylines (e.g., trajectories), polygons (e.g., administrative regions), graphs (e.g., transportation networks), or rasters (e.g., remote sensing images), in a hidden embedding space so that they can be readily incorporated into deep learning models. One fundamental step is to encode a single point location into an embedding space, such that this embedding is learning-friendly for downstream machine learning models such as support vector machines and neural networks. We call this process location encoding. However, there lacks a systematic review on the concept of location encoding, its potential applications, and key challenges that need to be addressed. This paper aims to fill this gap. We first provide a formal definition of location encoding, and discuss the necessity of location encoding for GeoAI research from a machine learning perspective. Next, we provide a comprehensive survey and discussion about the current landscape of location encoding research. We classify location encoding models into different categories based on their inputs and encoding methods, and compare them based on whether they are parametric, multi-scale, distance preserving, and direction aware. We demonstrate that existing location encoding models can be unified under a shared formulation framework. We also discuss the application of location encoding for different types of spatial data. Maps in the form of digital images are widely available in geoportals, Web pages, and other data sources. The metadata of map images, such as spatial extents and place names, are critical for their indexing and searching. However, many map images have either mismatched metadata or no metadata at all. Recent developments in deep learning offer new possibilities for enriching the metadata of map images via image-based information extraction. One major challenge of using deep learning models is that they often require large amounts of training data that have to be manually labeled. To address this challenge, this paper presents a deep learning approach with GIS-based data augmentation that can automatically generate labeled training map images from shapefiles using GIS operations. We utilize such an approach to enrich the metadata of map images by adding spatial extents and place names extracted from map images. We evaluate this GIS-based data augmentation approach by using it to train multiple deep learning models and testing them on two different datasets: a Web Map Service image dataset at the continental scale and an online map image dataset at the state scale. We then discuss the advantages and limitations of the proposed approach. The University at Buffalo Artificial Intelligence Institute was established in 2018 to explore ways to combine machines’ superior ability to ingest, connect and recall information with concepts that humans excel at, such as reasoning, judgement and strategizing, to develop dynamic human-machine partnerships. Its mission is to bring together educators and researchers in an interdisciplinary environment to continue to make significant breakthroughs in advancing the promise of machine or human-machine systems that can address complex cognitive tasks. Our GeoAI Lab was invited to join UB AI Institute as an affiliated lab. 
This affiliation will further strengthen the connections and collaborations between our lab and other UB units. The objective of this project is to understand how people describe locations on social media during natural disasters. These data from social media are potentially beneficial in disaster response efforts, and to further this goal, computational algorithms are being developed to extract location information from social media postings. However, uneven use of social media by different populations and varying ways of describing places can complicate the process of identifying locations. This project advances knowledge by enhancing the understanding of the ways in which people describe geographic locations during natural disasters, the effectiveness of different algorithmic approaches for location extraction, and the potential spatial biases in the described locations. Such knowledge benefits society by informing future disaster response practices to help save lives and reduce inequality in response efforts. This project provides interdisciplinary research experience for undergraduates and graduates and will enhance academia and industry partnership. The datasets and algorithmic tools produced from this project will be publicly shared. Social media platforms, such as Twitter, are increasingly being used by people impacted by natural disasters. Descriptions about the locations of victims and accidents are often contained in help-seeking messages posted on these platforms. However, a limited understanding exists of how locations are described on social media during natural disasters, which hinders their automatic extraction via computational tools. This project addresses three research questions: (1) What are the typical forms of location descriptions used by people on social media during natural disasters? (2) How effective are different geospatial artificial intelligence (GeoAI) approaches for extracting these location descriptions and representing them in geographic space? And (3) What spatial biases have characterized location descriptions on social media during natural disasters? The research team is collaborating with emergency management specialists to understand location descriptions on social media, examine multiple geo-knowledge-informed AI approaches for location extraction, and investigate the spatial biases of the extracted locations and their relation to vulnerable communities. The obtained knowledge about location descriptions and the developed methods can be applied to future disasters in diverse settings. We are seeking three UB undergraduate students with work-study support to contribute to two NASA-funded projects in collaboration with an interdisciplinary team of scientists and conservation professionals located in Buffalo, NY, Merced, CA, and Cape Town, South Africa. Scientific Illustrator – Are you interested in science but passionate about art? Create digital artwork that captures the incredible biodiversity of South Africa and the technology we are using to study it. Learn more and apply at https://app.joinhandshake.com/jobs/4834211. Geographic Data Visualization Analyst – Are you interested in learning how to tell stories with maps and building your online GIS portfolio? Join us to develop ‘story maps’ that share geospatial information, photographs, and details about the biodiversity hotspot and project. Learn more and apply at https://app.joinhandshake.com/jobs/4834301. 
GIS Analyst – Do you want to get involved under the hood of a NASA project using high-resolution imagery and artificial intelligence to help conserve biodiversity? Help us document ecological change by developing a geospatial dataset we’ll use for model training and prediction. Learn more and apply at https://app.joinhandshake.com/jobs/4838720. Important – these positions are only open to current UB students with work-study support for the 2021-2022 academic year. See here for more information about work-study positions at UB. We received the notification from NASA that our project proposal “Near-Real-Time Forecasting and Change Detection for a Fire-Prone Shrubland Ecosystem” was selected for funding support. This project aims to utilize statistical modeling and GeoAI methods for near-term ecological forecasting to predict natural land surface processes and evaluate near-real-time changes in the state of a hyperdiverse, fire-dependent and seasonally fluctuating open ecosystem: the fynbos of the Cape Floristic Region (CFR) of South Africa. Our research team consists of: • Dr. Adam M. Wilson, Principal Investigator, Wilson Lab, Department of Geography, University at Buffalo, State University of New York, United States • Dr. Yingjie Hu, Co-Investigator, GeoAI Lab, Department of Geography, University at Buffalo, State University of New York, United States • Dr. Glenn R. Moncrieff, Co-Investigator, Fynbos Node, South African Environmental Observation Network, South Africa • Dr. Jasper A. Slingsby, Co-Investigator, Fynbos Node, South African Environmental Observation Network, South Africa We will be hiring a Graduate Research Assistant (starting from Fall 2022 or earlier) and a post doc researcher (starting around Summer 2021), both of whom will be co-advised by Dr. Adam Wilson and Dr. Yingjie Hu. By participating in this project, the GRA and post doc researchers will develop expertise on GeoAI, raster data processing, biodiversity, and ecological forecasting. Interested candidates are encouraged to contact Dr. Hu.
OPCFW_CODE
from boxsdk import Client, OAuth2
import os
import sys


def ConfigObject(config_path):
    """Read a key=value configuration file and return it as a dictionary."""
    configDict = {}
    with open(config_path, 'r') as config:
        for line in config.readlines():
            try:
                configDict[line.split("=")[0]] = line.split("=")[1].rstrip()
            except Exception:
                pass
    return configDict


def uploadZippedToBox(zippedFolder, boxfolder=None):
    """Upload a single zip file to the Box folder, replacing any existing item with the same name."""
    if boxfolder is None:
        boxfolder = accessUploadFolder()
    try:
        items = boxfolder.get_items()
        for item in items:
            if item.name == os.path.basename(zippedFolder):
                try:
                    item.delete()
                except Exception as e:
                    print(e)
                    return False
        boxfolder.upload(zippedFolder)
        uploaded = True
    except Exception as e:
        print(e)
        uploaded = False
    finally:
        return uploaded


def accessUploadFolder(year=2020):
    """Authenticate against Box and return the upload folder for the given year."""
    # Read the client ID, access token, and folder ID from the app config file.
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "instance")
    config = ConfigObject(os.path.join(path, 'Boxapp.cfg'))
    CLIENT_ID = config['client_id']
    CLIENT_FOLDER = config['client_folder' + str(year)]
    ACCESS_TOKEN = config['access_token']

    # Create the OAuth2 object and the authenticated client.
    auth = OAuth2(client_id=CLIENT_ID, client_secret='', access_token=ACCESS_TOKEN)
    client = Client(auth)

    # Make sure we connected.
    try:
        me = client.user(user_id='me').get()
        print(me.name)  # developer name tied to the token
    except Exception:
        sys.exit("ERROR: Invalid access token; try re-generating an "
                 "access token from the app console on the web.")

    tfolder = client.folder(CLIENT_FOLDER)  # e.g. the 2020 SCADA data folder
    return tfolder


def listZipFiles(directory_folder):
    """List the zip files in the directory folder, including subdirectories."""
    zipFiles = []
    for root, dirs, files in os.walk(directory_folder):
        for name in files:
            if name.endswith('.zip'):
                zipFiles.append(os.path.join(root, name))
    return zipFiles


def uploadAllZippedToBox(zipFolder):
    """Upload local zip files to Box, replacing same-named files that already exist there.

    If an existing file on Box cannot be deleted, the corresponding local zip is
    skipped rather than uploaded again.
    """
    # Files to upload.
    zipFiles = listZipFiles(zipFolder)
    tfolder = accessUploadFolder()

    # Delete any Box items that share a name with a local zip file.
    items = tfolder.get_items()
    for item in items:
        matches = [z for z in zipFiles if os.path.basename(z) == item.name]
        if matches:
            try:
                item.delete()
            except Exception as e:
                print(e)
                # If we couldn't delete the existing zip file, don't try to upload a new one.
                for z in matches:
                    zipFiles.remove(z)

    uploadedFiles = []
    badUploads = []
    for zipped in zipFiles:
        try:
            uploadZippedToBox(zipped, tfolder)
            uploadedFiles.append((zipped, True))
        except Exception as e:
            print(e)
            badUploads.append((zipped, False))
    return uploadedFiles, badUploads
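A minimal usage sketch for the helpers above, assuming a Boxapp.cfg file exists in the instance/ subdirectory with client_id, access_token, and client_folder2020 entries; the local directory path here is hypothetical.

# Hypothetical usage of the upload helpers above.
if __name__ == "__main__":
    # Upload every *.zip found under this (made-up) local directory to the Box folder.
    uploaded, failed = uploadAllZippedToBox("/data/scada/2020")
    print("Uploaded:", uploaded)
    print("Failed:", failed)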
STACK_EDU
In the rare case you see pools of water around the enclosure, your ball python’s humidity is too high. Humidity much above 70% can quickly lead to a respiratory infection. To reduce the humidity, you should increase the tank’s ventilation. (Apr 22, 2022)

What Temperature And Humidity Does A Ball Python Need?
One of the most difficult parts of owning a Ball Python is maintaining the temperature and humidity levels. You want the temperature gradient to be between 75°F and 95°F, while you want the humidity levels to be between 55% and 60%.

What Humidity Is Too Low For Ball Python?
In short, ball pythons require humidity levels of between 50 and 60%.

When Is A Ball Python Full Grown?
A Ball Python will reach its full-grown size between two to three years old. Most pet Ball Pythons will grow at least three feet long. A male will typically top off at only 2.5 to 3.5 feet long, while a female can grow anywhere from 4 to 6 feet long. Both sexes are fully grown within three years.

How Big Will My Ball Python Get?
Captive ball pythons typically reach a length of 4 to 5 feet, although 6-ft wild specimens have been found. Hatchlings range from 10 to 17 inches (25.4 to 43.2 cm). Captive-raised ball pythons grow to more than 3 feet in length within 3 years.

How Big Is A 2 Year Old Ball Python?
#3: Larger ball pythons generally weigh more. Body construction is important when looking at a ball python’s weight. This starts being more noticeable as your ball python grows, after around 1-2 years of age. A ball python that is 4-5 feet will often reach around 2-2.5 kg.

How To Make A Humidity Box For Ball Python
How to Make and Use Humidity Boxes – YouTube: www.youtube.com › watch

How Long Can A Ball Python Stay In A Humidity Box?
Keeping humidity lower than 40% for more than a few weeks can be very harmful to a ball python, especially during shedding. You may notice some of the issues above.

Do Ball Pythons Need A Humid Hide?
If you use a glass tank, provide a humid hide, that is, a hide that has some damp moss in it for extra humidity. A shy ball python will feel more secure if there is a hide for them on the cool side and the warm side of the enclosure (more about that in the heating section).

How Do You Make A Humid Enclosure?
Regular misting. The best way to help keep your cage at the proper humidity levels is to spray the cage once or twice a day with room temperature water. You can use a hand-held spray bottle, or a pressure sprayer with a gentle mist. Lightly mist the entire enclosure, including the animal, substrate, and cage walls.

How To Feed A Baby Ball Python
In general, the youngest, smallest ones eat small frozen feeder mice or rats. Larger ball pythons typically eat larger mice or rats. Selecting prey for a ball python: as a general rule, you should select a rodent that is 1 to 1.25 times the size of the midsection of your snake. (Jul 7, 2021)

How Much Do You Feed A Baby Ball Python?
Hatchling pythons grow very fast! Females can grow up to 12 inches within a year and males can grow eight inches per year. To keep up with their growth, hatchlings need to eat a lot. They should be fed a hopper mouse every five days for the first four weeks of their life.

When Should I First Feed My Ball Python?
How often should a Ball python be fed?
1. As hatchlings, I feed every five to seven days for a good six or seven months. …
2. Then I change it to once every seven to ten days, once the snake is at six or seven months old. …
3. It’s also vital you are going up in prey size with the snake’s growth.
How Long Does It Take For A Baby Ball Python To Eat?
Your snake may be the best one to answer this question for you, but typically an adult snake (over one year of age) will eat once every 10 to 14 days. Younger snakes should eat more often since they are still growing. They should eat at least once a week, or even once every 5 to 6 days while growing. (Feb 24, 2022)

Do Baby Ball Pythons Eat?
Despite the fact that baby ball pythons eat nothing but animal protein, they ingest it in a similar way as adult ball pythons do. The most essential thing to note is that the prey must be the appropriate size. Pinky mice and fuzzy rats are the typical diet of baby ball pythons, as well as large crickets. (Nov 14, 2021)

How Do I Know If My Ball Python Is Stressed?
14 Signs That Show That Your Snake Is Stressed:
- Loss Of Appetite
- Weight Loss
- Rubbing Their Nose Against Objects In Their Tank
- Hissing
- Striking
- Attempting To Escape (Make Sure To Check The Following To Keep Your Snake From Escaping)
- Tail Rattling And Vibration
- Regurgitation

How Do I Destress My Ball Python?
How to Calm Down a Snake:
1. Move Slowly. Quick movements can frighten snakes and send them into fight or flight mode. …
2. Good Behavior. Snakes typically react to handling with fear or, if they are calm and relaxed, curiosity. …
3. Guide, Don’t Restrain. …
4. Cutting Your Losses. …
5. Provide a Comforting Home.

How Do I Know If My Ball Python Is Unhappy?
Signs Your Snake Is Worried:
1. Sudden Movements. Rather than the slow, almost lethargic, movements of a content snake, a worried one will make sudden movements and may not rest for long periods. …
2. Submissive Posture. …
3. Looking for Escape. …
4. Hissing Noises. …
5. Eating Disorders. …
6. Tight Grip. …
7. Striking.

What Do Ball Pythons Do When Stressed?
Ball pythons will wag their tail when breeding or feeding, but high arousal can also lead to stress. When you see tail wagging when you are not feeding and it’s not breeding season, then this shows a very stressed pet.

How Do I Make Sure My Ball Python Is Happy?
- Be a relatively large enclosure.
- Maintain ambient daytime temperatures of 80-85°F (27-29°C).
- Provide for a basking area of 90-92°F (32-33.3°C).
- Provide hide boxes.
- Have access to fresh water in a bowl that is large enough for the snake to soak.

How Long Does It Take A Ball Python To Eat?
This will typically take about five hours for a rat or two hours for a mouse.

How Long Should It Take For Snake To Eat?
Many things can affect the rate at which your snake digests prey. In the best of circumstances, a snake with access to suitably warm temperatures may digest a small mouse in two or three days. Conversely, a large python who consumes a deer may spend weeks digesting.

How Long Does It Take For A Snake To Swallow Its Food?
The warmer their bodies, the faster they digest their food. But it generally takes 3–5 days for food to be digested. Very large snakes such as the anaconda from South America eat rather large prey, so their digestion can take weeks.

How Long Can A Ball Python Go Between Feedings?
An adult ball python can survive up to 6 months without eating, but such a long period can be disastrous for the health of the reptile.

When Should I Feed My Ball Python?
How often to feed a ball python: you don’t need to feed a ball python every day. Generally, smaller or younger ball pythons need to eat every five days, while larger ones usually eat once every week or two. As they get older you feed them more at one time, so they don’t need as many feedings. (Jul 7, 2021)

What Time Should I Feed My Ball Python?
Ball pythons are happy to eat frozen-thawed prey, but snakes that have previously eaten live prey may take some time to adjust to dead prey. Ball pythons are nocturnal, so the best time for feeding is in the evening or just after you have turned out the lights.

How Long Should I Wait To Feed My Ball Python?
They should eat at least once a week, or even once every 5 to 6 days while growing. If your snake doesn’t want to eat weekly, it is okay to wait longer to feed him again the next time. (Feb 24, 2022)

How Do I Know If My Ball Python Is Hungry?
Snakes will let you know when they’re hungry. They will start prowling their enclosure and their tongue flicks will increase in frequency and number.

How Much Does A Piebald Ball Python Cost?
Piebald Ball Python Price: Prices at 7 Popular Online Stores – uniquepetswiki.com › piebald-ball-python-price

Are Piebald Ball Pythons Rare?
The pied ball python is a unique color morph of the standard ball python. The pied color morph is extremely rare in the wild, but it is becoming increasingly popular in the pet trade.
OPCFW_CODE
|Date of birth||11 August 1999| I am a young autodidact programmer who could be qualified as a Jack of all trades : I went from creating video games to making robots with an Arduino, without forgetting developing my own programming language (in order to understand how it works under the hood). My favorite languages are Python and C++, but I am a fast learner and can adapt easily to any environment with enough time. Organized from the most mastered to the least one Linux, Windows, Arduino PyCharm, VisualStudio 2017, Atom IDE, CMake, Git, Tiled, Libre Office, Pygame, SFML, SDL, OpenGL, JQuery, Bootstrap, D3.js, ANTLR4, Elastic search 2018 Laureate of the Bourse Coddity 2017-today Student in PeiP (integrated preparatory class) at Polytech 2017 French baccalauréat, with honours (16.86 / 20) 2017 First Certificate of English, grade B (equivalent of GCSE) September 2018-today I am working on a 3D game using OpenGL, created in a Minecraft style. I learnt (and I am still learning) a lot about OpenGL and rendering technics, and voxels’ world optimizations. August 2018 I worked on a 3D rendering library on top of OpenGL, which I am using to create my own Minecraft clone. I learnt the basics of 3D rendering and the inherent problems such as chunk mesh optimization. May 2018 I created a small 16-bits operating system from scratch, to learn how it works under the hood. December 2017-today Working on a programming language inspired by Java™, Kafe, running on a VM. The most interesting parts are how to optimize the generated bytecode and how to design the interface of the virtual machine to be able to use it easily in video games. August 2016-today Working on a Pokémon® oriented video game project with a 3-person team (see it there) ; I discovered how to organize a project of a consequent size. May 2015-January 2017 Managed to create a Terraria® like project (UrWorld), even though it was very buggy, I learned a lot about game making. July 2018 Traineeship at the IRHT (department of the French CNRS). My job was to design websites to visualize data in different ways (you can see it here, there and here). I also worked on Python scripts to import/export data from/to the database used (Elastic search) at the IRHT. Another project I was given was a website to visualize different versions of an ancient text (encoded in XML TEI) easily. July 2016 Worked in a pharmacy as a technician, for a month. My job was to prepare medicines for a machine to pack them into kit. June 2015 Traineeship in Polytech Orléans for a week, where I helped an electronic technician to prepare electronic circuits for practicals. English : B2 level July 2015 School trip for a month to Australia. I was housed by a local family, who helped me discovering the culture of the country. Spanish : basic, B1 level 2013 School trip for a week in Barcelona, where I have been able to practice my Spanish. I am an autodidact : I learnt video game programming by myself, as well as the fundamentals of a programming language, running on a virtual machine, by making one. I also discovered the base of an operating system by making my own. I practised over 4 different sports (basket ball, French boxing, swimming, gymnastic), and learned over 10 different programming languages because I am a very curious person. I like learning new ways of thinking and solving problems. Whenever I have a new project idea, I am following it, just to see how I could implement this or that feature, behaviour or design pattern (my GitHub repositories can confirm it). 
As it is almost two years since I have been working on my Pokémon® oriented project, and about two years on a remake of Terraria® (first version was UrWorld, then Wilanda, and now UrPlanet), I think I can tell I am very invested in my projects.
OPCFW_CODE
Yesterday, Cedric Huesler, the Director of Project Management at Adobe started a Twitter thread on AEM as a Cloud Service. I have a number of burning questions about AEM as a Cloud Service, so I figured I'd take Cedric up on the offer to ask anything. Cedric graciously replied, hopefully you find these replies helpful as we all learn more about AEM as a Cloud Service: Q1. What's the plan for customers who do not want to run this in Adobe's tenant? A1. Today we are offering AEM as CS, MS and self-hosting. We have a single code base for all three. Q2. What is the roadmap for AEM "Classic"? A2. We will update SP schedule later this week. A2 (part 2). SP schedule updated with 2020 dates: https://helpx.adobe.com/experience-manager/maintenance-releases-roadmap.html @WimSymons - Not really an answer. Okay there will be extra SP's and CFP's, but the question is, will there any "classic" AEM major releases after 6.5 or will you put your money on AEM in the cloud only? A2 (part 3). For 2020, we keep updating the single code base for the 3 deliverables and don’t see a need for a major release (to avoid efforts on customer-side to move up - SP are easier to install). If you are not yet on 6.5 - please do that 1st Q3. What's the underlying persistence mechanism? S3+Mongo? A3. See Oak code-base - mongo and new segment blob store. Q4. Is this running on Azure? AWS? Will customers be able to choose? A4. It’s a mix - mostly azure today but can change anytime. Q5. Will customers be able to choose what AZ's to deploy their instances? Including China? A5. Yes, today you can choose regions. Q6. Will Customers be able to specify routing rules for requests based on the visitors origin? A6. Yes, CDN is bundled in. Q7. Can you temporarily skip / roll back releases if you encounter problems? A7. Yes and No, the release validation process does that - automated. Q8. How often will Adobe ship new releases and what will be the procedure to validate customer images? A8. Daily, validation process is quite evolved. Will link up details later Q9. What's the index storage mechanism? SOLR? A9. Today unchanged. Q10. Is there a plan / timeline to port the remaining AEM apps (Screens, Forms) to AEM as a Cloud Service? A10. Yes - this year.Huesler, Cedric (@keepthebyte). “1216857162405707777” 13 Jan 2020 Tweet. Do you have your own questions about AEM as a Cloud Service? Ask Cedric yourself! Additional Insights on AEM as a Cloud Service A few of the underlying technologies which Cedric references or are useful to read: 2019 AdaptTo seminar on the Sling Feature Model. WKND has been updated to be compatible with AEM as a Cloud Service.
OPCFW_CODE
Request API POST

I'm having a problem with a Python request, where I pass the body to the API as data. Note: I have a Node.js project with TypeScript that works normally, prints to the screen and returns values. However, if I make the same request in Python it doesn't work and returns a 401 error. Below is an example of how the request is made in Python. Can you help me?

import requests
url = 'https://admins.exemple'
bodyData = { 'login': 'admins', 'pass': 'admin', 'id': '26' }
headers = {'Content-Type': 'application/json'}
resp = requests.post(url, headers=headers, data=bodyData)
data = resp.status_code
print(data)

It looks like you are not correctly authenticating with the backend. You need to figure out how to do that.

Please dump the dict to a JSON string as follows:

import json
resp = requests.post(url, headers=headers, data=json.dumps(bodyData))

You can also pass your dict to the json kwarg:

resp = requests.post(url, headers=headers, json=bodyData)

It will set Content-Type: application/json and dump the dict to JSON automatically.

"I have just checked it out" - Did you do requests.get() or requests.post()?

"If you need that header set and you don’t want to encode the dict yourself, you can also pass it directly using the json parameter" - This is what I was thinking of... which it looks like you addressed in your edit. I'm cleaning up the comments here. I'm still not convinced that this will solve the 401 response. We don't have enough information about how authentication and authorization work in the system the OP is using.

Thanks, it worked! I used it this way:

import json
resp = requests.post(url, headers=headers, data=json.dumps(bodyData))

As the API returned the data as text (CSV) and not as JSON, I didn't get to use it.

@WillistonSousaNunes As sudden_appearances says, you can replace data=json.dumps(bodyData) with json=bodyData.

It's returning the data on screen but raises a JSON decoder failure; as I understand it, the API response has some null fields and that's why the error occurs. Whatever I try, either the error continues or it doesn't bring back the data, so I'm still looking for a way.

You are not correctly authenticating with the server. Usually, you need to send the username and password to a "sign in" route which will then return a token. Then you pass the token with other requests to get authorization. Since I don't know any details about your server and API, I can't provide any more details to help you out.

The credentials are correct, because I have an automation that runs on top of this API doing the same thing, but in Node.js; I need to do it in Python due to the ease of integrating with Power BI.
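Putting the thread's suggestions together, here is a minimal sketch of the working request; the URL and credentials are the placeholders from the question, and whether the server also requires a sign-in token is unknown.

import requests

url = 'https://admins.exemple'
bodyData = {'login': 'admins', 'pass': 'admin', 'id': '26'}

# json= serializes the dict and sets Content-Type: application/json automatically.
resp = requests.post(url, json=bodyData)
print(resp.status_code)

# The API in the question returns CSV text rather than JSON, so read resp.text
# instead of calling resp.json() (which would raise a JSON decode error).
if resp.ok:
    print(resp.text)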
STACK_EXCHANGE
Memory issues while generating HiPS

I'm generating HiPS over all S-PLUS DR4 dual photometry. The dataset is 160 GB, composed of 1412 files. We use:
- Ubuntu 22.04
- 40 GB of RAM
- 24 CPU cores

If I run for a small fraction of the dataset, everything goes fine. But with the whole dataset I'm experiencing some memory issues leading to errors. I set Client(memory_limit="20GB") just to be sure. In the reducing step, the warning below is raised multiple times after ~15% progress.

2024-05-08 13:51:47,158 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 13.09 GiB -- Worker memory limit: 18.63 GiB

I watched htop while running it and the memory increases until it hits the max of the machine and then it also fills the swap. After this, it starts to give errors like:

2024-05-08 13:51:58,116 - distributed.worker - WARNING - Compute Failed Key: reduce_pixel_shards-7ba00d127f1b0b6d31357c24fc765d79 Function: reduce_pixel_shards args: () kwargs: {'cache_shard_path': '/storage2/splus/HIPS/catalogs/dr4/dual/intermediate', 'resume_path': '/storage2/splus/HIPS/catalogs/dr4/dual/intermediate', 'reducing_key': '2_72', 'destination_pixel_order': 2, 'destination_pixel_number': 72, 'destination_pixel_size': 160841, 'output_path': '/storage2/splus/HIPS/catalogs/dr4/dual', 'ra_column': 'RA', 'dec_column': 'DEC', 'sort_columns': 'ID', 'add_hipscat_index': True, 'use_schema_file': None, 'use_hipscat_index': False, 'storage_options': None} Exception: "FileNotFoundError('/storage2/splus/HIPS/catalogs/dr4/dual/intermediate/order_2/dir_0/pixel_72')"

Investigating the Dask docs at https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os, it seems that a possible solution on Linux is to manually free memory with:

import ctypes
def trim_memory() -> int:
    libc = ctypes.CDLL("libc.so.6")
    return libc.malloc_trim(0)
client.run(trim_memory)

The problem is that this seems to be an implementation that frees the memory within the client instance in the main thread only. Any idea on how to move on with this? This issue is directly related to #267

That's interesting! We have a notebook to estimate what your pixel_threshold should be, according to your data. You could see if the results from the notebook match your new value: https://hipscat-import.readthedocs.io/en/stable/notebooks/estimate_pixel_threshold.html

I am closing this as it seems to be solved for now. Unmanaged memory issues continue to plague us...
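For reference, a hedged sketch of the two workarounds touched on in this thread: note that client.run executes a function on every worker, not only in the main process, and the Dask docs linked above also suggest lowering glibc's malloc trim threshold via an environment variable. Whether either resolves this particular pipeline's unmanaged memory is not guaranteed; the threshold value below is only the commonly cited one.

import ctypes
import os

# Workers inherit the environment, so set this before the cluster/client is created
# (assumption: workers are spawned locally from this process).
os.environ["MALLOC_TRIM_THRESHOLD_"] = "65536"

from dask.distributed import Client

client = Client(memory_limit="20GB")

def trim_memory() -> int:
    # Ask glibc to return freed memory to the OS in the worker process.
    libc = ctypes.CDLL("libc.so.6")
    return libc.malloc_trim(0)

# Runs on every worker in the cluster.
client.run(trim_memory)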
GITHUB_ARCHIVE
Data Structures on Disk Drives

The mechanisms of disk drive technology are only half of the story; the other half is the way data is structured on the disk. There is no way to plan for optimal storage configurations without understanding how data is structured on the surface of disk drive platters. This section discusses the following data structures used in disk drives:

Tracks, Sectors, and Cylinders

Cylinders are the system of identical tracks on multiple platters within the drive. The multiple arms of a drive move together in lockstep, positioning the heads in the same relative location on all platters simultaneously. The complete system of cylinders, tracks, and sectors is shown in Figure 4-3.

For instance, a system could have different partitions to reserve storage capacity for different users of the system or for different applications. A common reason for using multiple partitions is to store data for operating systems or file systems. Machines that are capable of running two different operating systems, such as Linux and Windows, could have their respective data on different disk partitions. Disk partitions are created as a contiguous collection of tracks and cylinders. Visually, you can imagine partitions looking like the concentric rings of an archery target with the bull's eye being replaced by the disk motor's spindle. Partitions are established starting at the outer edge of the platters and working toward the center. For instance, if a disk has three partitions, numbered 0, 1, and 2, partition 0 would be on the outside and partition 2 would be closest to the center.

Logical Block Addressing

With logical block addressing, the disk drive controller maintains the complete mapping of the location of all tracks, sectors, and blocks in the disk drive. There is no way for an external entity like an operating system or subsystem controller to know which sector its data is being placed in by the disk drive. At first glance this might seem risky, letting a tiny chip in a disk drive be responsible for such an important function. But, in fact, it increases reliability by allowing the disk drive to remap sectors that have failed or might be headed in that direction. Considering the areal density and the microscopic nature of disk recording, there are always going to be bad sectors on any disk drive manufactured. Disk manufacturers compensate for this by reserving spare sectors for remapping other sectors that go bad. Because manufacturers anticipate the need for spare sectors, the physical capacity of a disk drive always exceeds the logical, usable capacity. Reserving spare sectors for remapping bad sectors is an important, reliability-boosting by-product of LBA technology. Disk drives can be manufactured with spare sectors placed throughout the platter's surface that minimize the performance hit of seeking to remapped sectors.

Geometry of Disk Drives and Zoned-Bit Recording

To take advantage of this geometry, disk drive designers developed zoned-bit recording, which places more sectors inside tracks as the radius increases. The general idea is to segment the drive into "sector/track density" zones, where the tracks within that zone all have the same number of sectors. The outermost zone, zone 0, has the most sectors per track, while the innermost zone has the fewest.
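As a brief aside, the cylinder/head/sector geometry and the logical block addresses that now hide it are related by the classic CHS-to-LBA mapping; the sketch below uses made-up geometry numbers purely for illustration.

def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Classic CHS -> LBA formula; sectors are numbered from 1 within a track."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# Hypothetical geometry: 16 heads per cylinder, 63 sectors per track.
print(chs_to_lba(cylinder=2, head=3, sector=10, heads_per_cylinder=16, sectors_per_track=63))  # 2214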
Logical block addressing facilitates the use of zoned-bit recording by allowing disk drive manufacturers to establish whatever zones they want without worrying about the impact on host/subsystem controller logic and operations. As platters are never exchanged between disk drives, there is no need to worry about standardized zone configurations. Table 4-1 shows the zones for a hypothetical disk drive with 13 zones. The number of tracks in a zone indicates the relative physical area of the zone. Notice how the media transfer rates change as the zones move closer to the spindle. This is why the first partitions created on disk drives tend to have better performance characteristics than partitions that are located closer to the center of the drive.

Table 4-1 Disk Drive Zones

Disk Drive Specifications

Mean Time Between Failures

MTBF specifications help create expectations for how often disk drive failures will occur when there are many drives in an environment. Using the MTBF specification of 1.25 million hours (135 years), if you have 135 disk drives, you can expect to experience a drive failure once a year. In a storage network environment with a large number of disk drives (for instance, over 1000 drives), it's easy to see that spare drives should be available because there will almost certainly be drive failures that need to be managed. This also underlines the importance of using disk device redundancy techniques, such as mirroring or RAID.

Speed and Latency

Related to rotation speed is a specification called rotational latency. After the drive's heads are located over the proper track in a disk drive platter, they must wait for the proper sector to pass underneath before the data transfer can be made. The time spent waiting for the right sector is called the rotational latency and is directly linked to the rotational speed of the disk drive. Essentially, rotational latency is given as the average amount of time to wait for any random I/O operation and is calculated as the time it takes for a platter to complete a half-revolution. Rotational latencies are in the range of 2 to 6 milliseconds. This might not seem like a very long time, but it is very slow compared to processor and memory device speeds. Applications that tend to suffer from I/O bottlenecks, such as transaction processing, data warehousing, and multimedia streaming, require disk drives with high rotation speeds and sizable buffers. Table 4-2 shows the rotational latency for several common rotational speeds.

Table 4-2 The Inverse Relationship Between Rotational Speed and Rotational Latency in Disk Drives

Average Seek Time

Transaction processing and other database applications that perform large numbers of random I/O operations in quick succession require disk drives with minimal seek times. Although it is possible to spread the workload over many drives, transaction application performance also depends significantly on the ability of an individual disk drive to process an I/O operation quickly. This translates into a combination of low seek times and high rotational speeds.

Media Transfer Rate

Sustained Transfer Rate

That said, sustained transfer rates indicate optimal conditions that are difficult to approach with actual applications. There are other important variables such as the size of the average data object and the level of fragmentation in the file system. Nonetheless, sustained transfer rate is a pretty good indication of a drive's overall performance capabilities.
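A small worked example of the two calculations described above: rotational latency as the time for half a revolution, and the expected failure rate implied by an MTBF figure, using the text's 1.25-million-hour example.

def rotational_latency_ms(rpm):
    """Average rotational latency = time for half a revolution, in milliseconds."""
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2.0) * 1000.0

for rpm in (5400, 7200, 10000, 15000):
    print(rpm, "RPM ->", round(rotational_latency_ms(rpm), 2), "ms")
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.0 ms, 15000 -> 2.0 ms

# Expected drive failures per year for a population of drives with a given MTBF.
mtbf_hours = 1_250_000
drives = 135
failures_per_year = drives * 8760 / mtbf_hours
print(round(failures_per_year, 2))  # ~0.95, i.e. roughly one failure per year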
OPCFW_CODE
/* Copyright (c) Microsoft Corporation. Licensed under the MIT License. */

#include <math.h>

#include "vt_debug.h"
#include "vt_fc_read.h"
#include "vt_fc_signature.h"

static VT_UINT fc_signature_calculate_maximum_index(VT_UINT* raw_signature, VT_UINT sample_length)
{
    VT_UINT index_max = 0;
    for (VT_UINT iter = 0; iter < sample_length; iter++)
    {
        if (raw_signature[iter] > raw_signature[index_max])
        {
            index_max = iter;
        }
    }
    return index_max;
}

static VT_INT fc_signature_calculate_37index(VT_UINT* raw_signature, VT_UINT sample_length)
{
    for (VT_UINT iter = 1; iter < sample_length; iter++)
    {
        if ((VT_FLOAT)raw_signature[iter] <= (VT_FLOAT)(0.37f * (VT_FLOAT)raw_signature[0]))
        {
            return iter;
        }
    }
    return abs_custom(
        (((VT_FLOAT)sample_length * -1.0f) /
            log((VT_FLOAT)raw_signature[sample_length - 1] / (VT_FLOAT)raw_signature[0])) +
        1.0f);
}

static VT_FLOAT fc_signature_calculate_correlation_coefficient(
    VT_UINT* signature1, VT_UINT* signature2, VT_UINT sample_length)
{
    VT_FLOAT sum_signature1            = 0;
    VT_FLOAT sum_signature2            = 0;
    VT_FLOAT sum_signature1_signature2 = 0;
    VT_FLOAT square_sum_signature1     = 0;
    VT_FLOAT square_sum_signature2     = 0;
    for (VT_UINT iter = 0; iter < sample_length; iter++)
    {
        // sum of elements of array signature1.
        sum_signature1 += signature1[iter];

        // sum of elements of array signature2.
        sum_signature2 += signature2[iter];

        // sum of signature1[i] * signature2[i].
        sum_signature1_signature2 += (signature1[iter] * signature2[iter]);

        // sum of square of array elements.
        square_sum_signature1 += (signature1[iter] * signature1[iter]);
        square_sum_signature2 += (signature2[iter] * signature2[iter]);
    }

    // use formula for calculating correlation coefficient.
    VT_FLOAT corr = ((VT_FLOAT)sample_length * sum_signature1_signature2 - sum_signature1 * sum_signature2) /
        sqrtf(((VT_FLOAT)sample_length * square_sum_signature1 - sum_signature1 * sum_signature1) *
              ((VT_FLOAT)sample_length * square_sum_signature2 - sum_signature2 * sum_signature2));

    return corr;
}

VT_UINT fc_signature_compute(
    VT_FALLCURVE_OBJECT* fc_object, VT_ULONG sampling_interval_us, VT_ULONG* falltime, VT_FLOAT* pearson_coeff)
{
    VT_UINT raw_signature[VT_FC_SAMPLE_LENGTH] = {0};
    VT_UINT sample_length                      = VT_FC_SAMPLE_LENGTH;
    VT_ULONG falltime_computed                 = 0;
    VT_FLOAT pearson_coeff_computed            = 0;

    fc_adc_read(fc_object, raw_signature, sampling_interval_us, VT_FC_SAMPLE_LENGTH);

    VTLogDebug("FallCurve Raw: \r\n");
    for (VT_UINT iter = 0; iter < sample_length; iter++)
    {
        VTLogDebugNoTag("%d, ", raw_signature[iter]);
    }
    VTLogDebugNoTag("\r\n");

    // Find index of Maxima
    VT_UINT index_max = fc_signature_calculate_maximum_index(raw_signature, VT_FC_SAMPLE_LENGTH);

    // Delete data BEFORE the maxima
    sample_length = sample_length - index_max;
    for (VT_UINT iter = 0; iter < sample_length; iter++)
    {
        raw_signature[iter] = raw_signature[iter + index_max];
    }

    // Find datapoint which reaches 37% of the starting value
    VT_INT index_37 = fc_signature_calculate_37index(raw_signature, sample_length);

    // fingerprint_length of this new fingerprint
    if (index_37 < sample_length)
    {
        sample_length = index_37 + 1;
    }

    // Calculate FallTime
    falltime_computed = (VT_ULONG)index_37 * sampling_interval_us;
    VTLogDebug("FallTime computed: %lu\r\n", falltime_computed);

    // Reconstruct exponential fall for the N points
    VT_UINT perfect_exponential_raw_siganture[VT_FC_SAMPLE_LENGTH];
    for (VT_UINT iter = 0; iter < sample_length; iter++)
    {
        perfect_exponential_raw_siganture[iter] =
            round((VT_FLOAT)raw_signature[0] * (VT_FLOAT)exp(-1.0f * ((VT_FLOAT)iter / (VT_FLOAT)(sample_length - 1))));
    }

    // Calculate pearson coefficient
    pearson_coeff_computed =
        fc_signature_calculate_correlation_coefficient(perfect_exponential_raw_siganture, raw_signature, sample_length);

#if VT_LOG_LEVEL > 2
    VT_INT32 decimal     = pearson_coeff_computed;
    VT_FLOAT frac_float  = pearson_coeff_computed - (VT_FLOAT)decimal;
    VT_INT32 frac        = frac_float * 10000;
#endif /* VT_LOG_LEVEL > 2 */
    VTLogDebug("Pearson Coeff: %lu.%lu \r\n", decimal, frac);

    if (sample_length > VT_FC_MIN_FALLTIME_DATAPOINTS && pearson_coeff_computed > VT_FC_MIN_SHAPE_MATCH)
    {
        *falltime      = falltime_computed;
        *pearson_coeff = pearson_coeff_computed;
        return VT_SUCCESS;
    }
    return VT_ERROR;
}
STACK_EDU
A Liar's Autobiography: The Untrue Story of Monty Python's Graham Chapman John Cleese, Michael Palin, Terry Jones and Terry Gilliam pay tribute to their late Monty Python colleague Graham Chapman in this hilarious, 3-D animated adaptation of Chapman's brazenly fictionalized life story. (TIFF) - Stars:Mohan Agashe, Kasturi Banerjee, Roshni Chopra, Rajneesh Duggal, Monica, Adah Sharma, Natasha Sinha, Graham Chapman, John Cleese, Terry Jones, Michael Palin, Terry Gilliam, Carol Cleveland, Philip Bulcock, Stephen Fry, Rob Buckman, Jamielisa Jacquemin, Diana Kent, Lloyd Kaufman, Tom Hollander, Peter Dickson, Margarita Doyle, You may also like A Liar's Autobiography: The Untrue Story of Monty Python's Graham Chapman torrent reviews Bonnie M (nl) wrote: I enjoyed this film for a glimpse into life in India. The whole lunchbox delivery system is fascinating, and "studied by Harvard," as one of the delivery men say in a funny line. It dragged a bit in the middle for me and I found the ending very unsatisfying. I wasn't sure if the filmmakers didn't know how to end it without giving in to convention (meeting at the train, etc.) or if they tried to be too avant-garde but it left me frustrated. I'm used to French films that end with ambiguity but this was a disappointing end and didn't stir me like some of those French endings do. Amber G (ru) wrote: Let it shine is my favorite movie Shafayat R (mx) wrote: Soft, smooth, sad and dramatic. Again proves that indie films are much more better than commercial ones. Victor M (es) wrote: A well assembled story about the emptiness of the manners of the high British society between wars with a great cast. At the beginning you can feel a little confused with so many characters but in the end you know all of them. Chetan (jp) wrote: I think it is karishma's best acting Perfection Q (au) wrote: i liked it when i was little and its still kinda cute to me lol =] Timothy M (kr) wrote: One of the better swansongs for any filmmaker of longevity. Although the film is brief, Huston plays his cards close to the chest until the final 10 minutes or so, where all is made abundantly clear (not that it's a mystery as to what it's all about - it can be figured out rather quickly too, but Huston's playing a more subtle game, even by his standards). Terrific performances all 'round, but both Donals McCann and Donnelly are exceptional. Dustin M (nl) wrote: I used to be scared of this film but now it's not that bad. The animation of the creep is more interactive to the audience and less scary and more good story telling than the first one. Leena L (de) wrote: I liked this. Always was wondering what this legendary tv-series was about. I was surprised about the indecency, intrigue and scandal of it! No wonder this was popular..... now just hoping the series would show on tv :) Manny C (jp) wrote: Before he broke out with his two signature masterpieces Grand Illusion and Rules of The Game, Jean Renoir had this great feature to his name, a simple tale of a down on his luck tramp taken in by a sweet family. Available in a wonderful Criterion Edition DVD. David W (au) wrote: Stupid and unnecessary for a sequel, Rocky 5 outs the franchise to a screeching holt for it to be redeemed in the future
OPCFW_CODE
Need SQL JOIN statement assistance

I am creating a report that needs to list both the Primary Person ID and the Alternate Person ID. It also needs to show the contact information for both the Primary and the Alternate Person IDs. The report I've created right now only lists the Primary Person ID's contact information, but shows the Alternate's ID number. Can someone assist me in fixing my SQL so that both the Primary's and the Alternate's contact information is listed, and not just the Primary's? The SQL I have is below.

SELECT "ORG_ACCOUNT".ACCOUNT_NUMBER AS "Account Number",
       "ORG_PERSON".ADDRESS_2 AS "Address",
       "ORG_ACCOUNT".DODAAC AS "Dodaac",
       "ORG_DODAAC".DRA AS "Dra",
       "ORG_PERSON".EMAIL AS "Email",
       "ORG_PERSON".FIRST_NAME AS "First Name",
       "ORG_PERSON".LAST_NAME AS "Last Name",
       "ORG_PERSON".LAST_TRAIN_DATE AS "Last Train Date",
       "ORG_PERSON".MIDDLE_NAME AS "Middle Name",
       "ORG_ALT_ACCOUNT_CUST".PERSON_ID AS "Alt Person Id",
       "ORG_ORG".ORG_NAME AS "Org Name",
       "ORG_ACCOUNT".PERSON_ID AS "Person Id",
       "ORG_PERSON".PHONE_COM AS "Phone Com",
       "ORG_PERSON".PHONE_DSN AS "Phone Dsn",
       "ORG_PERSON".RANK AS "Rank"
FROM "ORG"."ORG_ACCOUNT" "ORG_ACCOUNT",
     "ORG"."ORG_DODAAC" "ORG_DODAAC",
     "ORG"."ORG_ORG" "ORG_ORG",
     "ORG"."ORG_PERSON" "ORG_PERSON",
     "ORG"."ORG_ALT_ACCOUNT_CUST" "ORG_ALT_ACCOUNT_CUST"
WHERE ( ( "ORG_PERSON".PERSON_ID(+) = "ORG_ALT_ACCOUNT_CUST".PERSON_ID )
    AND ( "ORG_ORG".ORG_ID = "ORG_ACCOUNT".ORG_ID )
    AND ( "ORG_PERSON".PERSON_ID = "ORG_ACCOUNT".PERSON_ID )
    AND ( "ORG_ALT_ACCOUNT_CUST".PERSON_ID = "ORG_ACCOUNT".PERSON_ID )
    AND ( "ORG_DODAAC".DODAAC = "ORG_ACCOUNT".DODAAC ) )
  AND ( UPPER("ORG_ACCOUNT".DODAAC) LIKE UPPER(:DODAAC)
    AND "ORG_DODAAC".DRA IN ( :P_DRA_ENTRIES )
    AND UPPER("ORG_ACCOUNT".DODAAC_COMMODITY) = UPPER('A') )
ORDER BY "ORG_DODAAC".DRA ASC, "ORG_ACCOUNT".ACCOUNT_NUMBER ASC, "ORG_PERSON".LAST_NAME ASC

Are you sure you are using MySQL and not SQL Server (MSSQL), Oracle or PostgreSQL? MySQL normally does not support double quotes for identifiers like database name, table name and columns unless you configure the sql_mode ANSI_QUOTES. I also advise you to read "Why should I provide an MCVE for what seems to me to be a very simple SQL query?" and provide text-formatted example data and matching expected text-formatted results.

(+) is Oracle syntax only; I didn't notice it before I changed the tags from MySQL to Oracle. The (+) can be used for a LEFT or RIGHT join; it's an extension by Oracle to support LEFT and RIGHT JOINs with the old ANSI comma JOIN syntax, which you should not be using anymore. You should be using proper JOIN syntax like table1 INNER JOIN table2 ON ... or table1 RIGHT|LEFT JOIN table2 ON ...

We are using Oracle in our workplace.

When you want to join a table twice, like you do here with ORG_PERSON, you need to list it twice in the FROM clause (with different aliases).
SELECT ORG_ACCOUNT.ACCOUNT_NUMBER AS "Account Number",
       ORG_PERSON.ADDRESS_2 AS "Address",
       ORG_ACCOUNT.DODAAC AS "Dodaac",
       ORG_DODAAC.DRA AS "Dra",
       ORG_PERSON.EMAIL AS "Email",
       ORG_PERSON.FIRST_NAME AS "First Name",
       ORG_PERSON.LAST_NAME AS "Last Name",
       ORG_PERSON.LAST_TRAIN_DATE AS "Last Train Date",
       ORG_PERSON.MIDDLE_NAME AS "Middle Name",
       ORG_ALT_ACCOUNT_CUST.PERSON_ID AS "Alt Person Id",
       ORG_ORG.ORG_NAME AS "Org Name",
       ORG_ACCOUNT.PERSON_ID AS "Person Id",
       ORG_PERSON.PHONE_COM AS "Phone Com",
       ORG_PERSON.PHONE_DSN AS "Phone Dsn",
       ORG_PERSON.RANK AS "Rank",
       alt_person.address_2 as "Alt Address",
       alt_person.email as "Alt Email",
       alt_person.first_name as "Alt First Name",
       alt_person.last_name as "Alt Last Name",
       alt_person.phone_com as "Alt Phone"
FROM "ORG".ORG_ACCOUNT ORG_ACCOUNT,
     "ORG".ORG_DODAAC ORG_DODAAC,
     "ORG".ORG_ORG ORG_ORG,
     "ORG".ORG_PERSON ORG_PERSON,
     "ORG".ORG_ALT_ACCOUNT_CUST ORG_ALT_ACCOUNT_CUST,
     "ORG".ORG_PERSON alt_person
WHERE ( ( alt_person.PERSON_ID(+) = ORG_ALT_ACCOUNT_CUST.PERSON_ID )
    AND ( ORG_ORG.ORG_ID = ORG_ACCOUNT.ORG_ID )
    AND ( ORG_PERSON.PERSON_ID = ORG_ACCOUNT.PERSON_ID )
    AND ( ORG_ALT_ACCOUNT_CUST.PERSON_ID = ORG_ACCOUNT.PERSON_ID )
    AND ( ORG_DODAAC.DODAAC = ORG_ACCOUNT.DODAAC ) )
  AND ( UPPER(ORG_ACCOUNT.DODAAC) LIKE UPPER(:DODAAC)
    AND ORG_DODAAC.DRA IN ( :P_DRA_ENTRIES )
    AND UPPER(ORG_ACCOUNT.DODAAC_COMMODITY) = UPPER('A') )
ORDER BY ORG_DODAAC.DRA ASC, ORG_ACCOUNT.ACCOUNT_NUMBER ASC, ORG_PERSON.LAST_NAME ASC

Some style notes: I removed the double quotes from your table names and aliases because they're annoying and unnecessary. But I left your query in the old proprietary Oracle join syntax instead of ANSI joins, since I know a lot of workplaces still use it as an internal coding standard. I left my changes in lowercase so they'd be easy to see.

Thank you so much. This worked. I will definitely take your advice. Again, thank you so much for the help!
STACK_EXCHANGE
feature idea: use bazeldnf as sysroot for hermetic C/C++ toolchains Some C/C++ toolchains use sysroots for cross compilation: https://github.com/grailbio/bazel-toolchain It should be possible to use bazeldnf to provide the sysroot directly from RPMs. I tried this naively by specifying a tar2files rule that provides the shared objects I want: tar2files( name = "sysroot-files", files = { "": [ "usr/lib64/ld-linux-x86-64.so.2", "usr/lib64/libc.so.6", "usr/lib64/libpthread.so.0", "usr/lib64/libm.so.6", ], }, tar = ":sysroot", visibility = ["//visibility:public"], ) ... but this ended in a dependency cycle: ERROR: /home/malte/.cache/bazel/_bazel_malte/378578e4f433132b2cb209458a9b56bc/external/llvm_toolchain_with_sysroot/BUILD.bazel:148:10: in filegroup rule @llvm_toolchain_with_sysroot//:linker-files-x86_64-linux: cycle in dependency graph: //3rdparty/bazel/com_github_google_go_tpm_tools/placeholder:ms_tpm_20_ref_disabled (3e8e717f4669c9418374bff8949496244fb3fd0c20f44265c211f83bca6cf692) @llvm_toolchain_with_sysroot//:cc-clang-x86_64-linux (3e8e717f4669c9418374bff8949496244fb3fd0c20f44265c211f83bca6cf692) .-> @llvm_toolchain_with_sysroot//:linker-files-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | @llvm_toolchain_with_sysroot//:linker-components-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | @llvm_toolchain_with_sysroot//:sysroot-components-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | //rpm:sysroot-files (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | @bazeldnf//cmd:cmd (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | @io_bazel_rules_go//:go_context_data (e93d1c23410f0ff85d23a7db057f8bc335ef39c02262590d5174b118f115d0b7) | @io_bazel_rules_go//:stdlib (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | @io_bazel_rules_go//:cgo_context_data (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) | @llvm_toolchain_with_sysroot//:cc-clang-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) `-- @llvm_toolchain_with_sysroot//:linker-files-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4) I would like to work on this but lack some of the understanding of how this really works under the hood. I would really appreciate if anyone has ideas and maybe some pointers where to look. This is indeed something I am also very interested in. I could not find a clean solution myself so far. What we do in kubevirt to achive that is a little bit involved: There is a target wich builds and extracts the buildroot in a well known location Then this is mounted into the buildroot of kubevirt with options like this: $ cat .bazeldnf/sandbox.bazelrc build --sandbox_add_mount_pair=/home/rmohr/gerrit/kubevirt/.bazeldnf/sandbox/default/root/usr/:/usr/ build --sandbox_add_mount_pair=/home/rmohr/gerrit/kubevirt/.bazeldnf/sandbox/default/root/lib64:/lib64 build --sandbox_add_mount_pair=/home/rmohr/gerrit/kubevirt/.bazeldnf/sandbox/default/root/lib:/lib build --sandbox_add_mount_pair=/home/rmohr/gerrit/kubevirt/.bazeldnf/sandbox/default/root/bin:/bin build --incompatible_enable_cc_toolchain_resolution --platforms=//bazel/platforms:x86_64-none-linux-gnu Thanks for the idea. That is a neat workaround. 
Knowing that this is a wanted feature and that you also cannot get it to work within the scope of one build invocation makes me think that this either requires advanced rule magic to prevent the cycle or maybe even a change in Bazel itself. I'll ask about it in the Bazel Slack and see if this is a known issue. Maybe it is possible once we have pre-built binaries (https://github.com/rmohr/bazeldnf/issues/40) and we can somehow tie that into a toolchain configuration itself ...
GITHUB_ARCHIVE
Recently, I came across an odd issue. My SQL Agent job which performs SSAS Cube processing was stuck for a while. Usually, it completes in a few minutes, but something strange happened and it was stuck for more than an hour. After googling and trying to find the reasons behind this unusual behavior, I realized that most of the advice recommended restarting Analysis Services on your server. I was quite disappointed that there is nothing like "kill" in T-SQL, so I decided to investigate further before restarting Analysis Services. Then, I found out that in fact there is an option to kill the processing of the cube, using XMLA (this is a special language for accessing data in analytical systems, but it is out of the scope of this blog post). First, we need to find the ID of the connection which is causing problems. Open your instance of SSAS, select new query and type this in the query editor (it's MDX syntax):
SELECT * FROM $SYSTEM.DISCOVER_CONNECTIONS
GO
SELECT * FROM $SYSTEM.DISCOVER_SESSIONS
GO
SELECT * FROM $SYSTEM.DISCOVER_COMMANDS
GO
Find the IDs of the process that is causing problems, copy and paste them into the following XMLA code:
<cancel xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
        xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
        xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
        xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
        xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200"
        xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300"
        xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300"
        xmlns:ddl400="http://schemas.microsoft.com/analysisservices/2012/engine/400"
        xmlns:ddl400_400="http://schemas.microsoft.com/analysisservices/2012/engine/400/400">
  <connectionid>CONNECTION ID THAT NEEDS TO BE KILLED</connectionid>
  <sessionid>SESSION ID THAT NEEDS TO BE KILLED</sessionid>
  <spid>SESSION SPID THAT NEEDS TO BE KILLED</spid>
  <cancelassociated>true</cancelassociated>
</cancel>
And that's it! You've just got rid of the stuck process. Last Updated on February 5, 2020 by Nikola
OPCFW_CODE
The default map type view in the MapBrowser is Vertical view, which lets you see Nearmap vertical aerial photography. Our vertical imagery has been orthorectified; that is, it has been transformed in such a way that each point on the map looks like it was photographed from directly above it. Orthorectification makes vertical aerial imagery suitable for use as a map and allows georeferencing. If you have switched to another map type, this is how you can get back to Vertical view: - New MapBrowser: select Vertical in the Compass list. - Classic MapBrowser: - US: in the Toolbar, click View -> Map, and then click PhotoMaps. - AU: click Vertical in the view mode selector located in the top right-hand corner. This document includes the following sections: There are five/six available view modes, depending on the MapBrowser you are using: Vertical - This is the standard, fully orthorectified overhead aerial map. Panorama - Panorama imagery offers a 45-degree angle view of a location, when available. Terrain - This map type lets you view the terrain. It can be very helpful to switch Street Maps or Properties on while viewing the terrain map to make navigating the map easier. Roads - The Roads mode shows the street maps only, with no aerial imagery visible. 3D - The 3D mode shows you the imagery in 3D. The most current survey we have captured will be the default map shown whenever you connect to the MapBrowser, so make sure you revisit the site regularly to view the latest imagery. First Imagery Dates We have captured imagery of Perth since November 2007. Sydney, Melbourne, Adelaide, and Brisbane images go back to late 2009. United States image captures began in 2014. No prior imagery is available. You can scroll back and forth between dates on which we have flown surveys using the timeline. How Much Area does a Pixel Cover? It varies. It is affected by how far you are zoomed in, but even at a single zoom level, the area covered by a pixel will vary. The MapBrowser displays maps in the same projection as most other web maps, EPSG:3857 (basically the same as EPSG:3785 and EPSG:900913). While this allows the world to be projected onto a rectangular map, it means that individual pixels will vary in the amount of actual Earth area they cover, depending heavily on distance from the Equator. The pixels themselves do not each represent a perfect square of the Earth's surface either. Using the Export and Save Imagery tool, you can request image downloads in projections based on datums such as GDA94 and NAD83, which apply only to specific locations and more accurately match the real world. When you select one of these projections using the tool, the pixel size for the saved image will be shown in the Save PhotoMaps panel. Read More about Viewing Nearmap Imagery
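To make the "How Much Area does a Pixel Cover?" discussion a little more concrete, here is a rough back-of-the-envelope sketch in Python (not part of the Nearmap documentation) of how the ground distance covered by one EPSG:3857 pixel shrinks with latitude and zoom level; the 256-pixel tile size and the WGS84 equatorial radius are the usual Web Mercator conventions, and the result is only an approximation:

import math

def ground_resolution_m_per_px(lat_deg, zoom, tile_size=256):
    """Approximate metres of ground covered by one Web Mercator (EPSG:3857) pixel.

    At the equator and zoom 0 a single 256-px tile spans the full Earth
    circumference (~40,075 km); the span shrinks by cos(latitude) and halves
    with every zoom level.
    """
    earth_circumference = 2 * math.pi * 6378137.0  # WGS84 equatorial radius in metres
    return earth_circumference * math.cos(math.radians(lat_deg)) / (tile_size * 2 ** zoom)

# e.g. at zoom 19 a pixel near Sydney (lat ~ -33.87) covers roughly 0.25 m of ground
print(round(ground_resolution_m_per_px(-33.87, 19), 3))

The cos(latitude) factor is why a pixel over Hobart covers less ground than a pixel over Darwin at the same zoom level, which is exactly the "pixels vary with distance from the Equator" behaviour described above.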
OPCFW_CODE
It is pretty clear on how to create animation and let it play through code. Is there also an controller that I can use like in unity. This is definitely also a concern since I am not sure how to go about playing multiple animation clips at the same time, how to mask them and in what way. top of page Hi, Cuddle Tree. Enabling the Animation Controller means that it works as a Mecanim System. Instead of exporting the animation separately from AnyPortrait, you need to Bake the character animation to play with the Animator. In the Bake dialog, you can set the character to be played by Mecanim. (1) Select the Setting tab, and (2) activate the Mecanim setting. Check "Is Mecanim Animation" and set the location where Animation Clips to be controlled by Animator will be saved. Then, you have to perform additional procedures further. We recommend that you read the tutorial below for detailed instructions. (Please click the below link box.) Since AnyPortrait operates with its own animation system different from Unity, the manual provides a way to link AnyPortrait and other functions of Unity. You can get a lot of information from our homepage, and if you are having trouble making what you want, feel free to let us know! We are here to help you quickly. I would like to use Unity Animation Controller since I know it well but when hitting bake I see nothing that shows how to export the animation. Hi, Cuddle Tree! Thank you for writing this post. As you mentioned the Animation Controller, you seem to be using the "Mecanim System". By using the "Layer" function in Mecanim's Animator, you can play multiple Animation Clips at the same time. In the 7th and 8th demos, "Pirate Game", the player character was implemented to play "Run and Shoot at the same time" in this way. Please see the tutorial below for how to use multiple layers. The important point here is, as you asked, how to merge the two animations. For example, you can mix the "Run" animation with the "Shoot" animation to create "Run and Shoot at the same time." In this case, the "Run" animation should be played first and the "Shoot" animation should be overwritten over it, but if the Mask isn't correct the "Run" animation will not be preserved. The solution is simple! In a "Shoot" animation that plays as a higher layer, you should not register "Parts not to be overwritten" as "Timeline Layer". In the example above, in the "Shoot" animation, if you do not register the character's "Two Legs" as the Timeline Layer, the "Run" animation for the Two Legs will be preserved even after merging. This method is the same as Unity's animation mixing, so we haven't described it in detail. So we think you are worried about this. We apologize for not writing the tutorial in more detail on this part. Perhaps if you try this feature, you can easily see how it works. In addition, even if you are not using Mecanim, the basic workflow is the same. For the part not to be overwritten, make sure that the Timeline Layer is not registered in the animation of the upper layer, and then call the animation playback function as follows. Play("Run", 0); //0 is Layer. 0 is the default and can be omitted. Play("Shoot", 1); //1 is Layer Also, most of the other playback functions include parameters for the layer. Descriptions of other functions can be found in the links below. If this problem has not been solved by the above answers or if you need more clarification, please add a post!
OPCFW_CODE
Will Ben Sims 18 mins read The US is still the largest IT service provider country in the world. The country has the highest smartphone user base in the world. According to the latest data, on average, Americans every day check their handheld devices 262 times, and that boils down to more than once every five minutes. Now, apps obviously take up most of the mobile usage as another statistic mentions that more than 88% of smartphone time is spent basically on apps. On the other hand, among mobile app development services, the companies in the USA are still ranked the highest and are most sought after for their expertise, skills, and professional benchmarks. Why do companies still look to US IT companies and developers to build their ambitious apps? What is the cost, and what are the hiring models for getting onboard app developers from the US? Let's try to find the answers to these questions. Why is the USA the Best Destination For App Development? As the world's predominant tech hub, the USA has ruled the development world for years. The impartial views of any app development consultant regarding the best destinations for picking up developer talent will largely cover the development firms of the US. What made the US so popular and sought after as the destination for app development? What are the key value propositions and incentives for hiring developers from the US? Let us explain below. Along with hosting the world's biggest IT companies such as Google, Microsoft, Amazon, IBM, Oracle, SAP, Apple, and many others, the country has some of the biggest and most reputed IT companies of its own. Every few years, new IT companies, ventures, and applications from the USA make breakthroughs in their respective industries. The country houses the world's most reputed IT schools and higher educational institutions for science, mathematics, and engineering degrees. Students from all over the world also attend these world-famous IT institutions. Naturally, the US never experiences a shortfall in the supply of great developer talent. The US takes the lead in establishing the benchmarks and norms for the global IT companies and IT services to follow. Most of the leading development companies in the US had pioneering roles in creating new development standards and industry benchmarks. Businesses can easily stay updated on the latest industry benchmarks and norms by hiring USA development companies. The US has been the place where most leading programming languages and skills have flourished and grown over the years. When it comes to programming and new skill development, US companies have never been left behind in exploring new technologies, skills, and languages. Naturally, when you look for a mobile app development company in the USA to build your app, you are likely to get the best options across every different skill. The development labs in the US are far better equipped than development companies in other parts of the world simply because of the country's predominant IT market and development scene. By hiring development companies from the US, you can easily get exposure to the world's best IT infrastructure. For IT outsourcing and for hiring developers from around the world, the US has been the best destination so far. Whether in respect of federal or state policies, the country always offers the most accommodating environment and regulations for outsourcing and hiring global talent. The IT industry of the country truly embodies the core values of globalization. 
Because of the stronger currency, the US has never been a cheaper destination for outsourcing development projects. But over the years, because of fierce competition on the costing among the development companies from the East coast to the West, projects from all over the world can find app developers with competitive pricing to fulfill their development requirements. Another major reason for the global businesses to hire US development companies and developers is the ease of access to the tech-savvy Human Resource in every part of the country. The US population is regarded to have the highest smartphone penetration in the world, and most toddlers and junior students just grow up being tech savvy. Naturally, in regard to both the audience and developer talents, this tech-savvy environment helps a business deliver more mature and innovative solutions. How Much Does It Cost Hiring App Developer In the USA? Now that you are more or less convinced about the advantages of hiring app developers from the USA, you must be asking what is the actual cost of hiring app developers in the USA. There is no direct answer to this question simply because there are several variable factors. But we can just try to have a look at these variable factors so that you can figure out the development cost more precisely. The biggest factor that is going to determine the development cost is the complexity of the app project and its feature set. For your app project to start with a cost advantage at the initial stage, it is always advisable to release an elementary app with basic features and gradually make value additions through updates. Target platform & technology stack: Whether you want to build a single app that runs on both iOS and Android or whether you want to build two separate native apps can influence the development cost to a great extent. The choice of the platform and corresponding technology stack will impact the cost and time for development. Incremental or full development: If you build a simple app for the first release and build it further with value additions through subsequent updates, the cost can be broken down over time. On the other hand, if you want to build the complete app in one go, the initial cost can be higher, and the course corrections can drain a lot of resources. Hiring and engagement model: When hiring developers, the hiring and engagement model also plays a crucial role in the overall cost of development for an app project. This also depends on the type of project and developer engagement that suits your app project. The technology stack and tools: The modular and component-based development frameworks ensure a lower cost of development simply because you can get just access ready to access components without writing much code. The third-party APIs and tools can also increase your development cost significantly. Choice of mid-level or top-notch development companies: When you are hiring developers from mid-level development firms, the cost can be significantly lower. But if you opt for top-notch development companies in the US with global footprints, you may need to pay a higher price. What Hiring Models Do USA IT Firms Offer? Now, let's come to the final consideration for hiring app developers and development services in the US. You need to choose the right hiring and engagement model that suits your app project perfectly. Let us have a quick look at the most popular hiring models that the IT and development firms in the US offer. 
This is the most popular model for ambitious app and software projects that need the rigorous engagement of the developers until project completion and beyond. Below, we mention the highlights of the dedicated hiring model. If your app project has only a set of fixed development needs that are well defined with their corresponding tasks and responsibilities, a fixed cost model is more suitable for the project. The best thing about this model is that it gives you comprehensive control of the development cost. The hourly hiring model, also called the Time & Material model, has been particularly popular with app projects that cannot settle all the project specifications at the start and cannot afford dedicated hiring costs. In the US, for hiring skilled developers, you can have the greatest exposure to developer talent as well as a variety of hiring and engagement models. If you research well, you can always find the most experienced and expert developer teams as per your requirements and within your budget. We provide free consultation! Get all your questions answered and we'll also draft the scope of work before you make any payments. "The app was unquestionably well-designed, but we had a few inputs on the navigation. The Cerdonis team was on it right from the word go. I must congratulate them on their proactive approach to developing our app. Excellent work, would recommend them. Cheers!" Expert business analyst
OPCFW_CODE
What to look for in a pair of sustainable shoes? Let's limit ourselves to "casual sport" shoes (e.g., everyday trainers, casual shoes, etc.). I have been doing some research online, essentially along two different lines: sustainability, using http://rankabrand.org/ ; and animal-free materials, browsing through the several companies which sell "vegan" shoes online (in the UK). What would be the main sustainability considerations when buying a pair of shoes? Which materials or what part of shoe manufacturing or disposal have the largest environmental impact? It considerably depends on the type of shoes (running shoes or everyday shoes) and whether you plan to repair them. Two experiences I have had: (a) After some detailed research I found a manufacturer of running shoes who offers to replace the sole when it is worn down (++). (b) One pair of eco-shoes I bought was worn down after ten months. I talked to a shoemaker about these shoes and he told me that the quality of the material used was low and that the shoes were not made to last longer than a year (--). Main considerations: source of materials. Obviously more oil is a bad thing. This casts aspersions on any plastic too. Longevity. A shoe that lasts for 10 years is 20 times the value of one that lasts for half a year. Cost of making. Cost is a measure of crystallized sweat. Energy. Labour. Cheaper is better. Cost != Retail Price, although often they are proportional. Locality. Something that you make yourself is better than something from downtown. Downtown is better than the next county. The next county is better than China. Turning that into pragmatic answers: Sandals made from old tires are a big win. The tires already exist, so making sandals isn't really an eco cost. Unless you walk for a living they are good for decades. I see these throughout Central and South America. Price in Peru was about 20 sols -- about $2. Rope soled shoes. Don't last nearly as long. You can find them on Instructables if you want to make your own. Leather moccasins. As long as we eat cow, we may as well use the wrappers the bovines come in. This is also something you can do for yourself from the internet. Go barefoot. Not a full time answer, but your feet toughen up a lot, and you can get away with it when not at work most of the time. Buy used shoes at Salvation Army, Goodwill and other such stores. This is tough as someone else's shoes can be worn in patterns that cause you foot and knee grief. I disagree with your point about "Cost is a measure of crystallized sweat. Energy. Labour. Cheaper is better." Low prices for shoes and clothing usually mean they were created in sweatshops, sometimes even with child labor. These manufacturers often don't follow environmental regulations and some are involved in illegal dumping of chemicals. May not be the best measure. I think it's a usable measure within a culture. Note too that Cost != Price. Summary: choose natural materials such as hemp, jute, organic cotton, or bamboo, preferably from a manufacturer that uses vegetable-based dyes, is fair-trade certified, and is working on reducing its environmental impact. Try to find shoes that last long and only buy new when you really have to. The biggest environmental impact of shoes comes from the materials used and the manufacturing process. If you also value 'social sustainability' you should also factor in the working conditions of the people who make the shoes. Materials: According to Nike, materials make up around 60% of the environmental impact of its shoes. 
Especially leather, synthetic plastics, and certain hybrid materials have a big impact, but also cotton, dyes and glues. The big impact of leather comes from raising cattle and the chemicals typically used in the tanning process (chromium). Plastics and hybrid materials are usually made from oil, and cotton is grown using lots of water, fertilizer and pesticides. Dyes and glues contain chemicals that negatively affect the environment as well as the health of the workers that manufacture shoes. Vegan leather isn't necessarily a more sustainable choice because it can be made from PVC which is one of the worst types of plastic. Carbon footprint A typical pair of synthetic trainers generates 30lbs (13.6 kg) of emissions, equivalent to leaving a 100-watt bulb burning for a week (source) Most shoes are made in Asia, but surprisingly transportation is only a small part of a shoe's carbon footprint. Researchers from MIT did a life-cycle analysis of an ASICS synthetic sport shoe and found that transport on average only accounts for 3% of the shoe's carbon footprint, and at most adds up to 7% when shipping to Canada (about the farthest country from China where the shoe was made). Raw material extraction and processing accounted for about 29% of the total carbon footprint and the manufacturing process was responsible for 68%. The reason why manufacturing has such a large footprint is because a single shoe consists of many parts that require multiple processing steps for assembly. The machines in Asian factories that perform these steps typically run on electricity generated by coal. Labor conditions Developing countries like India and Bangladesh, and to a lesser extent also China, do not have strong worker safety or environmental protection standards. Sweatshops are common in these countries and there have been many reports about child labor. Workers are often exposed to toxic chemicals and left-over chemicals are often spilled or dumped. Recommendations There are a few blogs that list fair-trade and sustainable shoe manufacturers (e.g. this one and this one) but the downside is that it's difficult to compare between listed companies. Alternatively you could do some investigations yourself. You already mentioned Rankabrand in your question which can be very helpful. A similar website is EthicalConsumer.org which has a section on shoes with interesting background information, but the newest company reviews there are behind a paywall. If a shoe manufacturer is not listed anywhere you can use the Internet to find out more. Especially look for information if the manufacturer is: using or switching to environmentally friendly materials, using or switching to renewable energy in their factories, trying to reduce the number of processing steps in manufacturing. was involved in child labor or dumping scandals in the recent past. Good materials to look for are natural materials such as hemp, jute, organic cotton, bamboo, vegetable-based dyes and to a lesser extent recycled plastics. If you are really set on leather, go for vegetable-tanned leather or better yet Piñatex. Unfortunately it's hard to tell in advance what the quality is of a pair of shoes and how long they will last, otherwise you could take that into the equation as well. The least impact comes from not buying new shoes often and wearing your old ones as long as possible.
STACK_EXCHANGE
OWASP Developer Application Security Pledge NB: This page is a rough draft of an idea we are working on and should not be used yet OWASP recognizes that many software developers are doing the hard work to become capable of repeatably producing secure applications. These individuals deserve a way to promote the fact that they are doing the right things. We have created the "OWASP Personal Application Security Pledge" to recognize these individuals and set a goal for other individuals to strive for. There is much more that developers can do, but we believe that these are the most critical steps that all individuals should have in place. To participate in the OWASP Pledge, please identify yourself and confirm that you are meeting the practices. None of the information from the program will be shared other than aggregate information and metrics. Once you have taken the pledge, you can use the pledge LOGO to promote the fact that you are taking steps to produce secure software. OWASP does not verify compliance with your pledge, but will assist in notifying you of any issue related to failure to keep your pledge. Failure to respond in a timely manner to an issue will result in revocation of the privilege of using the OWASP logo. The OWASP Developer Application Security Pledge NB: Need to add verifiable items To demonstrate my commitment to designing, building, and testing applications that are trustworthy enough for my business and its customers, I hereby confirm that: - 1. I follow application security principles. - I understand the foundational principles of application security and interpret them for the environment and application I am building. - 2. I understand the threat model and verify security requirements and architecture before coding. - Before I start coding, I make sure that I understand who the threat agents are, what kinds of attacks are possible in my environment, and what functions and assets are potentially vulnerable. I ensure that the requirements and architecture specify countermeasures to address these risks before I start coding. - 3. I understand the common application security vulnerabilities. - I've had training in application security and understand how common application security issues such as those described in the OWASP Top Ten apply in my development environment. I keep abreast of developments in application security such as new attacks and vulnerabilities. - 4. I use standard security mechanisms and patterns. - I understand that security mechanisms are difficult to build correctly, and that the use of established, tested, and centralized security mechanisms is much more likely to result in a secure system than attempting to build custom security mechanisms. - 5. I perform security testing and code review for common application security vulnerabilities. - I review my application's code for potential vulnerabilities, using both manual review and tools. I use proxy tools to expose my code to attacks not possible with the standard user interface, and I use fuzzing tools to expose my code to input that could make it misbehave.
OPCFW_CODE
Note that this algorithm usually takes into account the order of the values only when swapping, so repeated values will likely not affect it. Otherwise, in the event your current project's Python source files are in "workspaceName/projectName/src/", you'd probably choose "Create 'src' folder and add it to the PYTHONPATH". Wild caught (WC) ball pythons are to some degree notorious for refusing to eat. They also tend to be really stressed from capture and transportation and fairly often have a significant parasite infestation that you do not want in your ball python! Captive bred (CB) ball pythons do not typically have these difficulties and tend to be just a little more expensive to buy, but they are well worth it! Python Regular Expressions – Real World Applied Python by Chandra Lingam will teach you pattern matching skills for log mining, big data parsing, cleanup and preparation with regex in Python. You will be able to confidently use regular expressions as a powerful text processing tool for data parsing, cleanup and preparation. This Python Regex tutorial will teach you how to use regular expressions as a powerful text processing tool. While offering choice in coding methodology, the Python philosophy rejects exuberant syntax (such as that of Perl) in favor of a simpler, less-cluttered grammar. As Alex Martelli put it: "To describe something as 'clever' is not considered a compliment in the Python culture." except that if x is an expression, it is evaluated only once. The difference is significant if evaluating the expression has unwanted side effects. This shorthand form is sometimes known as the Elvis operator in other languages. Note that Java, in a manner similar to C#, only evaluates the used expression and will not evaluate the unused expression.[8] The import statement, which is used to import modules whose functions or variables can be used in the current program. There are two ways of using import: from <module> import * or import <module>. There are lots of situations when a student fails to shine just because they don't have proper guidance. We would like to make sure they never have to deal with any trouble like this, and that's why myhomeworkhelp.com gives them consistent help around the clock. How can this code be fixed so the function finishes properly and correctly sorts a list of any (reasonable) size? These operators compare the values on both sides of them and decide the relation among them. They are also called relational operators. What about converting a list to a tuple? Is there a function for that? Where would you go to learn?
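The passage above touches on Python's conditional expression (the ternary, "Elvis"-style shorthand) and the fact that only the branch that is actually used gets evaluated. Here is a tiny self-contained illustration of that point; the function name is invented purely for the example:

# Minimal sketch: the conditional expression evaluates only the branch it uses,
# so side effects in the unused branch never run, and the condition is evaluated once.
def expensive_default():
    print("computing default...")  # side effect we want to avoid
    return 42

x = 7
value = x if x is not None else expensive_default()  # expensive_default() is never called here
print(value)  # -> 7, and "computing default..." is never printed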
OPCFW_CODE
What happened to the Ju 52 (“Aunt Ju“)? I remember when I was a child, my parents and me often saw the Aunt Ju flying. But about 5 years later it disappeared. We never saw it again. Does anybody know what has been the use of the Ju 52 after the 70s? During my military service, I have flown in that plane many times... In the early 70s. Today, most surviving units are in museums, but there are a couple of Ju52s still flying sightseeing trips, one of them in Frankfurt, if I remember correctly. Taste is a personal habit, I love the 747. But I will never understand why so many people love the JU52, it's corrugated surface is really awkward (ugly). @Peter Probably because it's a trimotor. That's a really unusual setup, there aren't very many models that used it. Haha. Good, but the definitive answer to three engines is the Lockheed L-1011 (Lucky Ten Eleven) :) At least for Germany I can assure it still flies! I live in Germany near Frankfurt and Egelsbach. Frankfurt is probably known, Egelsbach is 20 km south and has a quite large "airport" for private aviation (I don't know the proper term. There are no scheduled flights.) During summer Aunt Ju can be seen (and heard!) here every few weeks. You can book sightseeing flights (e.g. here or here) or one-way flights. The two Ju52's usually seen here are the "D-AQUI" and the "HB-HOY". They tour throughout Germany until autumn and are then revised until next spring. An excerpt of the next year's schedule for the HB-HOY shows three sightseeing flights around Egelsbach and Frankfurt, 40 minutes for ~280 EUR (~330 USD) and a one-way flight to Essen (60 mins for 250 EUR). During the last years the D-AQUI also hopped flew from Egelsbach to Frankfurt (that's just 10 minutes). They say the next year's flight plan for the D-AQUI will be released in January 2018. I'm sorry that most of the links are in German. Update (Aug 5 2018): Sadly, just yesterday (Aug 4 2018) one of the Aunts, HB-HOT, crashed in Switzerland. All twenty people aboard died. They don't know the reason yet. Update (Jan 27 2019): Again, sad news: The Lufthansa (which sponsors the "D-AQUI") has decided to stop their sponsorship for the maintenance. They say it's just too expensive for them to keep their Aunt Ju airborne for passenger flights. So "D-AQUI" will not be seen or heard any longer regularly. Unfortunately I have only German reference for this on www.spiegel.de (a major German news magazine). If you want the "real" Ju, only the Swiss planes will do. Lufthansa changed the D-AQUI beyond recognition: Different engines (BIG difference in sound!), different tires, different control system (don't look into the cockpit; will make you sick!). The list goes on; all in the name of safety and easy part replacement. The Swiss, on the other hand, kept their planes in pristine condition and they run like, well, like Swiss clockworks. @PeterKämpf Very interesting. I was aware that Lufthansa modified the D-AQUI but not to this extent. Nevertheless I personally prefer seeing the D-AQUI because the Swiss models are often painted with ads which makes them – to me – look like flying suitcases. I must admit though, it's a perfect match. Back in the Eighties I had a longer conversation with one of the technicians who did the restauration. He did not hide his disgust at the way how Lufthansa handled the work. The pedal unit on the left side is straight from a 737. The pushrods in the original control system had adjustable rod ends, so play and ease of movement could be fine-tuned. 
He was quite enthusiastic about their quality, but they were all thrown out and replaced by standard aviation-certified parts of much lower quality. And so on. Only the outside is kept in original shape (apart from the number of blades, of course). Concerning those ads: The person who inherited the Rimowa factory put his fortune into recreating a Junkers F 13. I think he deserves to have his brand prominently displayed on a Ju 52. According to Wikipedia there are still 8 Ju 52s flying today. Which one do you remember seeing flying? List of airworthy Ju 52s The Ju 52 trimotor is approaching 90 years old. In its time it was a slightly larger version of the Ford Trimotor. Corrosion and age are grounding the remaining airframes (Central Europe is no Arizona). The corrugated skin worked very well to improve skin strength. The cruise speed of around 100 knots did not present excessive drag penalties. The Ju 52 carried a crew of 2 and around 17 passengers. Its innovative "double wing" slotted trailing edge gave good low speed performance, much like Fowler flaps today. This type generated some interest when Siemens came out with their electric aircraft motors, with the possibility of mounting the electric motor in the nose for takeoff performance and using the gas motors to maintain charge. Maybe they will build some new ones this century.
STACK_EXCHANGE
Make powershell.exe available at build time After flipping --experimental_strict_action_env, we need to pass --action_env=PATH to make sure some specific tools are available on Windows. Fixing https://buildkite.com/bazel/bazel-at-head-plus-disabled/builds/4#a868eb9b-dc52-4ce1-8adb-0da82e8a407b /cc @buchgr Another though, maybe it's better to add --action_env in the bazelrc file directly? Hmm. Is powershell always available in the same place on windows? Would something like --action_env=PATH=<C:\path\to\powershell.exe work? I think passing in the whole path is typically not recommended, because it's not compatible with remote caching / execution. On my Windows workstation: pcloudy@PCLOUDY0-W C:\tools\msys64\home\pcloudy > where powershell.exe C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Looks like it depends on the powershell version, so it's not always the same. Also, currently we set PATH as msys bin path when --experimental_strict_action_env, this make bash tools available, will --action_env=PATH=<C:\path\to\powershell.exe> override it? Yes, I think that would be very useful. But for now, we still need this change to fix the Windows build, right? /cc @filipesilva seems related to the problem of yarn PATH causing non-incremental builds, WDYT? @meteorcloudy could you send this as a CL to third_party/bazel_rules/rules_typescript? see the README.google.md in that directory if you'd like to use copybara to import this PR Powershell is not guaranteed to be on any given machine though. The windows nanoserver docker image does not contain it, for instance. It also does not support msys2 anyway, thus also does not support Bazel, but it's an example. Is powershell a requirement now for Bazel? @alexeagle the error in https://buildkite.com/bazel/bazel-at-head-plus-disabled/builds/4#a868eb9b-dc52-4ce1-8adb-0da82e8a407b doesn't seem related to the yarn path issue (https://github.com/angular/angular/issues/27514). This is just a missing binary (powershell), whereas the other one was the env path changing on every run.. But I agree that perhaps the --action_env=PATH flag can help work around it. @filipesilva powershell is not a requirement for Bazel, but it should be a native tool on Windows, which means users usually don't have to install it. But I agree we should avoid it if possible, so that we don't have to use --action_env=PATH to work around. @alexeagle Yes, they'll need to add --action_env=PATH for building on Windows. So, do you think it's better to add --action_env=PATH to the .bazelrc file? If so, users don't have to do anything, but the downside is this will also affect other platforms. I really want to echo what @achew22 asked: Why is this change necessary on a per project basis instead of being global? Can you also help me understand why isn't powershell treated like bash and made available without any flags? Is a change like this coming for Mac/Linux soon? It's a bit surprising, and onerous, that users need to effect this change to build on windows but shouldn't add it to build on other platforms. Can't a non-powershell script be used instead of requiring this change downstream on all projects? merged in d53748ce14074543b265803c1b7a2eaea4f03271
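For readers landing on this thread later: one way the ".bazelrc" suggestion discussed above could be scoped so it does not affect other platforms is a platform-specific config, assuming a Bazel version that supports --enable_platform_specific_config. This is an illustrative sketch, not the change that was actually merged:

# .bazelrc sketch (illustrative, not taken from this PR)
build --enable_platform_specific_config   # makes build:windows lines apply only on Windows hosts
build:windows --action_env=PATH           # forward PATH so powershell.exe can be found at build time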
GITHUB_ARCHIVE
This guide is for users of [Supported Operating Systems](🔗) without direct access to the internet and is geared towards a base install. Please read through the [Simple Automated Installation](🔗) guide for prerequisites and general information before embarking on this guide. You will need to either: set up a network proxy to enable access to the public repositories, or take a copy of our public Opsview Monitor repository and transfer it to a server on your target network, to create a repository mirror _Note:_ The repository is not browsable; a mirroring process is the best way to download all of the packages Setting up the mirror is outside the scope of this documentation, but we suggest using a tool to achieve this such as [reposync](🔗) for CentOS and RHEL, or [apt-mirror](🔗) for Debian and Ubuntu. The base URLs for mirroring our repositories are: Please note, additional OS packages may be installed during this process, so you may also need to provide access to mirrored or local OS repositories, too. ### Offline Repo Setup Example This is a simple example of setting up a local repository The setup of the repo assumes that the server pulling the repository files is on the same OS/Arch as the Opsview install target server. Create local area to store Opsview packages Inform the Package manager of the Opsview files Add a repo source file Refresh the package manager cache You will also need to take a copy of the installation script (and verify its checksum) before transferring this to the target server: Ensure the returned string matches the following: Note: All of these steps should be run as the ` Using the Opsview Monitor deploy script downloaded earlier, install the initial package set. Specify a suitable ` <password>` to use with the Opsview Monitor ` admin` account here. This will make use of both the mirrored OS repository and the mirrored Opsview repository you have previously set up and configured. Amend the file ` /opt/opsview/deploy/etc/opsview_deploy.yml` and ensure all hostnames detected have the domain specified on them, for example, assuming the hostname is ` Note: change ` example.com` to match your own domain Amend the file ` /opt/opsview/deploy/etc/user_vars.yml` and add in the following appropriate line for your OS to specify the URL to your local Opsview Monitor package repository mirror: If you intend to use optional modules, you can enable them by adding lines as follows (note: these will still need an appropriate license to enable them) in the same file: Continue the installation using the following command: Note: If a more advanced setup is required then take a look the [Advanced Automated Installation](🔗) documentation first. Run the post-install configuration step: Once successfully installed, perform a manual activation by following the steps on [Managing your Subscription](🔗) After activation is successful, run: This will check your newly activated license and ensure all appropriate modules are installed and configured for your use. ## Logging in During the installation, a single administrative user will have been created. The credentials for this user are: After the system is activated, carry out a reload by navigating to 'Configuration => [System] => Apply Changes' and pressing the ` Apply Changes` button.
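As a rough illustration of the "Offline Repo Setup Example" steps above for a CentOS/RHEL target, a local repository definition might look like the following; the repo id, path, and gpgcheck setting are hypothetical placeholders rather than Opsview's documented values, and a Debian/Ubuntu system would use an apt sources entry instead:

# /etc/yum.repos.d/opsview-local.repo  (hypothetical example, not Opsview's official file)
[opsview-local]
name=Local Opsview Monitor mirror
baseurl=file:///opt/repos/opsview
enabled=1
gpgcheck=0

After copying the mirrored packages into the baseurl directory you would typically generate repository metadata over it (for example with createrepo) and refresh the package manager cache before running the deploy script.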
OPCFW_CODE
- 1 Can you write games in Java? - 2 How do you write a text based adventure game? - 3 How do you make an interactive fiction game? - 4 Should I make a game in Java or C++? - 5 Is C++ harder than Java? - 6 What is a text adventure game? - 7 How do you make a text game in notepad? - 8 What are the characteristics of a good text based adventure game? - 9 How do you write interactive storytelling? - 10 How do you make a story based game? - 11 How do you make a game like choices? - 12 What is faster go or Java? - 13 Is Java too slow? - 14 Why is Java bad for games? Can you write games in Java? In comparison to programming languages like C++, Java is easier to write, debug, learn and compile. If you are looking into Java game programming for beginners, you ‘ll need to understand the basics of coding with this language first. The average salary for a game developer is $65,000 but that could go up to $103k/year. How do you write a text based adventure game? 13 Tips For Writing a Good Text Adventure Game - 1 – Play it. - 2 – Start small. - 3 – Define a scope for your adventure. - 4 – Describe the settings and directions in a clear and specific way. - 5 – The text commands need to be instinctive. - 6 – Be sure to add a tutorial or “help” button in the game. - 7 – Write special events or “cutscenes” in an interesting way. How do you make an interactive fiction game? 5 Steps to Create Interactive Story Games with Twine - Plan your game. Firstly, you need a concept. - Write your game. Firstly, you have to download Twine. - Link your Passages. Linking up your game is easy. - Add multimedia. You can spice up your game by adding pictures or changing the style of the text. - Publish Your Game. Should I make a game in Java or C++? Is C++ harder than Java? It is harder, as it more complex and a lot more hard to learn. Actually, it’s complexity makes Java a lot more easier to perceive. C++ complexity also makes it a lot more vulnerable to bugs and errors that are hard to be detected, unless you use one of those programs, such as checkmarx, that helps with it. What is a text adventure game? Text adventures (sometimes synonymously referred to as interactive fiction ) are text -based games wherein worlds are described in the narrative and the player submits typically simple commands to interact with the worlds. How do you make a text game in notepad? Making a Game in Notepad and Much Much More - Step 1: Introduction to Batch. Batch is a language that runs primarily out of your Windows command prompt. - Step 2: Cls, Exit, Title, and Color. - Step 3: Goto. - Step 4: Set/p and If. - Step 5: Ping Localhost -n 7 >nul. - Step 6: %random% - Step 7: Text to Speech Converter. - Step 8: Star Wars!!! What are the characteristics of a good text based adventure game? What would your ideal text adventure game contain? - Deep, well thought out (philosophical maybe?) - An intriguing story of course, - Something I’ve never seen or thought of before – Text based games are so much more flexible than their pixelly counterparts simply due to the fact they require imagination on the part of the author and the player. How do you write interactive storytelling? Here are 8 easy planning tips for writing your first interactive novel. - Pick a cool setting. - Create a main character with a goal, but without too many details. - Limit characters. - You may not need or want a villain. - Include a handful of helping items. - Plant treasures. - Plant clues. - Establish several endings. How do you make a story based game? 
How do you write a video game? - Outline the major storyline. - Decide what type of game it will be. - Develop your world. - Create your main characters. - Create a flowchart of your major story. - Start writing the major story. - Add in side quests, NPCs, and other small details. - Experience playing video games. How do you make a game like choices? They are basically narrative based games where user feels he is deciding the how the story unfolds in the game. Possible ways to add an extra dimension of decision making to a tactics game? - attack enemy. - defend against enemy attack. - support other teammates (heal, buff spells, etc) What is faster go or Java? Go is faster than Java on almost every benchmark. This is due to how it is compiled: Go doesn’t rely on a virtual machine to compile its code. It gets compiled directly into a binary file. On a benchmark test to calculate factorials, by Sunny Radadiya, Go performed better than Java. Is Java too slow? Modern Java is one of the fastest languages, even though it is still a memory hog. If you still think Java is slow, see the benchmarks game results. Tightly optimized code written in a ahead-of-time compiled language (C, Fortran, etc.) can beat it; however, Java can be more than 10x as fast as PHP, Ruby, Python, etc. Why is Java bad for games? The problem with Java is handling of memory management and the resulting GC pauses. Even a seemingly basic game like Minecraft has big issue with that. You can be slow (which Java isn’t, really), but it’s a no go when keeping larger state means from time to time you freeze the game to collect objects.
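Looping back to the text-adventure questions above, here is a minimal command-loop sketch in Python (rooms and commands are invented purely for illustration) showing the kind of structure the tips describe: clear room descriptions, an instinctive command set, and a built-in "help":

rooms = {
    "cave": {"description": "A damp cave. Exits: north.", "exits": {"north": "forest"}},
    "forest": {"description": "A quiet forest. Exits: south.", "exits": {"south": "cave"}},
}

def play():
    location = "cave"
    while True:
        print(rooms[location]["description"])
        command = input("> ").strip().lower()
        if command in ("quit", "exit"):
            break
        elif command in ("help", "?"):
            # tip: always offer a help command so players can discover the verbs
            print("Commands: go <direction>, help, quit")
        elif command.startswith("go "):
            direction = command[3:]
            # stay put if the direction is not a valid exit from this room
            location = rooms[location]["exits"].get(direction, location)
        else:
            print("I don't understand that.")

if __name__ == "__main__":
    play()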
OPCFW_CODE
Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type. For the past three decades, this belief that wellbeing should take preference over material growth has remained a global oddity. By hitting the fertile overlap between pragmatic and utopia, we architects once again find the freedom to change the surface of our planet, to better fit contemporary life forms. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. They were going to let hosts rent out space on their floors during conventions. One of the biggest dangers of not using the organic method is the example of the organic method. At YC we call these "made-up" or "sitcom" startup ideas. Turning off the schlep filter is more important than turning off the unsexy filter, because the schlep filter is more likely to be an illusion. So you have two choices about the shape of hole you start with. Most recently she was the Design Leader for a residential complex in Hualien, Taiwan that seeks to blur the line between natural landscape and the built environment. You can also be at the leading edge as a user. Particularly as you get older and more experienced. Do you find it hard to come up with good ideas involving databases? Lots forgot USB sticks. It also faces an increasingly uncertain future. Not only do they learn that nobody wants what they are building, they very often come back with a real idea that they discovered in the process of trying to sell the bad idea. Get funded by Y Combinator. The best plan may be just to keep a background process running, looking for things that seem to be missing. At the same time they want to travel the world, listen to Korean pop music and watch Rambo. Make something unsexy that people will pay you for. After completing architectural studies at California Polytechnic University, Leon has worked with renowned offices in Japan, Scandinavia, and Portugal, designing a variety of cultural, residential and master planning projects around the globe, including the New Oslo Central Station and the Ginza Swatch Building in Tokyo. Which in turn is why search engines are so much better than enterprise software. Whereas a PhD dissertation is extremely unlikely to. The person who needs something may not know exactly what they need.
OPCFW_CODE
|In this Heavy and yet incomplete Issue: Mike Wolf, Walter Ferrari, Colin Eberhardt, Mathew Charles, Don Burnett, Senthil Kumar, cherylws, Rob Miles, Derik Whittaker, Thomas Martinsen(-2-), Jason Ginchereau, Vishal Nayan, and WindowsPhoneGeek. Above the Fold: ||"Automatically Showing ToolTips on a Trimmed TextBlock (Silverlight)" ||"Windows Phone Blue Book Pdf" ||"Discover Sharepoint with Silverlight - Part 1" Dave Isbitski has announced a WP7 Firestarter, check for your local MS office: Announcing the “Light up your Silverlight Applications for Windows 7 Firestarter” - Leveraging Silverlight in the USA TODAY Windows 7-Based Slate App - Mike Wolf has a post up about Cynergy's release of the new USA TODAY software for Windows 7 Slate devices, and gives a great rundown of all the resources, and how specific Silverlight features were used... tons of outstanding external links here! - Discover Sharepoint with Silverlight - Part 1 - Walter Ferrari has tutorial up at SilverlightShow... looks like the first in a series on Silverlight and Sharepoint... lots of low-level info about the internals and using them. - Automatically Showing ToolTips on a Trimmed TextBlock (Silverlight) - Colin Eberhardt has a really cool AutoTooltip attached behavior that gives a tooltip of the actual text if text is trimmed ... and has an active demo on the post... very cool. - RIA Services Output Caching - Mathew Charles digs into a RIA feature that hasn't gotten any blog love: output caching, describing all the ins and outs of improving the performance of your app using caching. - Emailing your Files to Box.net Cloud Storage with WP7 - Don Burnett details out everything you need to do to get Box.Net and your WP7 setup to talk to each other. - Shortcuts keys for Developing on Windows Phone 7 Emulator - Senthil Kumar has some good WP7 posts up ... this one is a cheatsheet list of Function-key assignements for the WP7 emulator... another sidebar listint - Windows Phone 7 Design Guidelines – Cheat Sheet - cherylws has a great Guideline list/Cheat Sheet up for reference while building a WP7 app... this is a great reference... I'm adding it to the Right-hand sidebar of WynApse.com - Windows Phone Blue Book Pdf - Rob Miles has added another book and color to his collection of both -- Windows Phone Programming in C#, also known as the Windows Phone Blue Book... get a copy from the links he gives, and check out his other free books as well. - Navigating to an external URL using the HyperlinkButton - Derik Whittaker has a post up discussing the woes (and error messages) of trying to navigate to an external URL with the Hyperlink button in WP7, plus his MVVM-friendly solution that you can download. - Set Source on Image from code in Silverlight - Thomas Martinsen has a couple posts up... first is this quick one on the code required to set an image source. - Show UI element based on authentication - Thomas Martinsen's latest is one on a BoolToVisibilityConverter allowing a boolean indicator of Authentication to be used to control the visibility of a button (in the sample) - WP7 ReorderListBox improvements: rearrange animations and more - Jason Ginchereau has updated his ReorderListBox from last week to add some animations (fading/sliding) during the rearrangement. - Navigation in Silverlight Without Using Navigation Framework - Vishal Nayan has a post that attracted my attention... Navigation by manipulating RootVisual content... I've been knee-deep in similar code in Prism this week (and why my blogging is off) ... 
- Creating a WP7 Custom Control in 7 Steps - WindowsPhoneGeek creates a simple custom control for WP7 before your very eyes in his latest post, focusing on the minimum requirements necessary for writing a Custom Control. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10
OPCFW_CODE
New Recipe: Intervention May I suggest that, before David puts more meat on these bones, that we get ourselves clear on the typical lifecycle which this recipe fits into. I suspect that we might also think about whether "intervention" is the right terminology. The word has been used rather loosly and I think we should only use it to refer to a record of an act by a person with an intention to help. We should maybe also think about whether a teacher correcting work is logically also an "intervention", and clarify whether this is in scope. The typical lifecycle should be discernable in the SSP workflow, and we can also indicate the Tribal Student Insight - Student Information Desk flow. The key thing is to be clear where in a case management scenario the recipe sits, and whether there should be a series of recipes. Agree with @arc12 . I have left the 'interventon' draft without any meat intentionally as it wasn't fleshed out much at the last meeting and I don't have experience with the intervention systems. The current draft has the system as the actor issuing an intervention, with instructor in the context as the person issuing it, but from the way I think you are describing the act is by the person intending to help, so the actor is the person making the intervention and the intervention system is the context. Could do with some guidance going forward Our related application is: http://www.tribalgroup.com/higher-education/customer-services-software/student-information-desk/ from ticket #126 via @arc12 but belongs here; @ds10 it might work, it depends on what the verb really is. As I've said before I dont like "intervened"; its too wooly. Things we might record. tutor - interviewed - student (an actual meeting for pastoral care or guidance) tutor - raised a concern about - student (equivalent to a LAP alert?) tutor - assigned remedial work to - student These feel less awkward than: tutor - intervened on - student From an internal Cetis call, questions for monday: Is this the same issue as #126, does having a student as the actor bring up activity when they are being acted on. If so, what is the approach for 126 and does it fit here. From Monday 5 Dec call: Generic 'intervene' verb needed, so that these can be queried easily. More details about the type of intervention could be put in the Context. Create some examples. Ahh - I had mentally noted, from the call: specific verbs needed 2, use context to somehow identify the umbrella, for ease of querying Actually 1: I think querying is one issue, but reconstructing a sequence is really part of the desirable path. Actually 2: the "umbrella" could well simply be to know that the statements came from a "case management" system, i.e. via a minted URI for that kind of application. Another issue here is that some cases might have particulary confidential interactions which should not be propagated to the LRW. Yes, we're thinking that using Context for detail is counter to the essence of Context - because Context is more to do with a 'wider system' than it is to do with a narrower one. So, Context for the umbrella seems in line with the xAPI meaning. However, that still leaves the difficulty of recording the detail, if that's required. One could imagine some form of extra 'source verb' type of construct (like a 'source data' in UDD), but I'm not yet sure how or whether that would work or be desirable. Examples needed, I think. How many realistically different intervention verbs might there be? @alanepaull I suspect not very many verbs in the recipe. 
My guess is that case management tools are quite customisable and that there is a potentially unbounded set of possibilities. It would be nice to think that many of these are nuances over a small set of acts which are functionally very similar: e.g. the making of an appointment to see an advisor, and the attending of the event; e.g. a communication by some medium from or to a student; e.g. the raising of a concern/alert by a person or machine. The nuance might be important from a pastoral care POV, but proliferation will force the data processors to ignore the nuances (IMO).

@alanepaull @arc12 My notes seemed to have a collection of different (and contradictory) ways we could go forward. I've put together some drafts to talk about. Opened a ticket for the Skype call at #130.

@ds10 - are those 01:00 and 03:00? I'll be in bed!

@arc12 oops! I've changed the timings but it has broken your response, sorry!

Notes: the Intervention recipe is likely to grow as we write statement templates. The general approach to this (to not have an intervention verb) has been roughly decided and a rough 'interview' statement has been knocked up. Keeping this open for now as a reminder, as closed tickets #126 #130.

@pbailey64 was asking where we were with intervention templates. Adam, myself, Dai, Lee and Alan had a look at this late last year. We had 3 candidates, which can be found in the interventions branch; in the end we went with candidate B. A fleshed-out example can be found in an interview intervention. The notes from candidate B say:
- The actor is the alerting system or tutor
- Specific verbs are used in the verb entity to describe the intervention
- Object contains the student
- We use context to somehow identify the umbrella, for ease of querying
I think this was put on the backburner to work on requirements for 1.0. I think it needs input from domain experts on what interventions can take place and what the IRIs are, etc.

A few weeks ago we decided this was to be revisited looking at some new data. I think @michaelwebbjisc was going to post some headers?

This doesn't have a milestone next to it, but the last comments suggest @michaelwebbjisc had some new data for this? Is it possible to have this? Is 1.1 a milestone for this?

I had a conversation with Greenwich last week where they wanted to classify things like giving students a skills course or an induction as an intervention, even though it wouldn't necessarily have been triggered by anything. They were interested in capturing it as a blanket intervention before they start determining who gets it based on other data...
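Picking up the "create some examples" action above, here is a minimal sketch of what a candidate-B style statement might look like for the "interview" case. To be clear, the verb IRI, account values and the context extension key below are hypothetical placeholders, not agreed recipe vocabulary:

{
  "actor": {
    "objectType": "Agent",
    "name": "A Tutor",
    "account": { "homePage": "https://institution.example.ac.uk", "name": "tutor-1234" }
  },
  "verb": {
    "id": "http://example.org/xapi/verbs/interviewed",
    "display": { "en-GB": "interviewed" }
  },
  "object": {
    "objectType": "Agent",
    "name": "A Student",
    "account": { "homePage": "https://institution.example.ac.uk", "name": "student-5678" }
  },
  "context": {
    "platform": "Case management system",
    "extensions": {
      "http://example.org/xapi/extensions/case": "http://example.org/cases/42"
    }
  },
  "timestamp": "2011-12-05T10:00:00Z"
}

The shape follows the candidate B notes: a person (or alerting system) as the actor, a specific verb, the student as the object, and the context used to flag that the statement came from a case management "umbrella".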
GITHUB_ARCHIVE
How Do I Draw A Slightly Diagonal Gradient Fill in PHP?

I found the following function can draw a vertical gradient in PHP. However, many web designers like their gradients to have an upper-left light source to make the gradient look more realistic. So, how do I change the angle slightly on the vertical gradient, making it into a slightly diagonal gradient? I don't want to completely overdo it, just a slight movement to the right as it proceeds down the vertical gradient.

<?php
function hex2rgb($sColor) {
    $sColor = str_replace('#', '', $sColor);
    $nLen = strlen($sColor) / 3;
    $anRGB = array();
    $anRGB[] = hexdec(str_repeat(substr($sColor, 0, $nLen), 2 / $nLen));
    $anRGB[] = hexdec(str_repeat(substr($sColor, $nLen, $nLen), 2 / $nLen));
    $anRGB[] = hexdec(str_repeat(substr($sColor, 2 * $nLen, $nLen), 2 / $nLen));
    return $anRGB;
}

$nWidth = 960;
$nHeight = 250;
$sStartColor = '#2b8ae1';
$sEndColor = '#0054a1';
$nStep = 1;

$hImage = imagecreatetruecolor($nWidth, $nHeight);
$nRows = imagesy($hImage);
$nCols = imagesx($hImage);
list($r1, $g1, $b1) = hex2rgb($sStartColor);
list($r2, $g2, $b2) = hex2rgb($sEndColor);
$nOld_r = 0;
$nOld_g = 0;
$nOld_b = 0;

for ($i = 0; $i < $nRows; $i = $i + 1 + $nStep) {
    $r = ($r2 - $r1 != 0) ? intval($r1 + ($r2 - $r1) * ($i / $nRows)) : $r1;
    $g = ($g2 - $g1 != 0) ? intval($g1 + ($g2 - $g1) * ($i / $nRows)) : $g1;
    $b = ($b2 - $b1 != 0) ? intval($b1 + ($b2 - $b1) * ($i / $nRows)) : $b1;
    if ("$nOld_r,$nOld_g,$nOld_b" != "$r,$g,$b") {
        $hFill = imagecolorallocate($hImage, $r, $g, $b);
    }
    imagefilledrectangle($hImage, 0, $i, $nCols, $i + $nStep, $hFill);
    $nOld_r = $r;
    $nOld_g = $g;
    $nOld_b = $b;
}

header("Content-type: image/png");
imagepng($hImage);

The following snippet runs much faster than the GD libraries and without the complexity. You have to install the ImageMagick extension for PHP, though.

$oImage = new Imagick();
$oImage->newPseudoImage(1000, 400, 'gradient:#09F-#048');
$oImage->rotateImage(new ImagickPixel(), -3);
$oImage->cropImage(960, 250, 25, 100);
$oImage->setImageFormat('png');
header("Content-Type: image/png");
echo $oImage;

I'm not going to do the geometry - but create the vertical gradient as a larger image, then rotate and crop it:

...
$degrees = -5;
$newImage = imagecreatetruecolor($nWidth, $nHeight);
$rotated = imagerotate($hImage, $degrees, 0);
imagecopy($newImage, $rotated, 0, 0, $x, $y, $width, $height);

imagerotate() is not there for me on PHP 5.2.4. It says on the php.net page for this function that it is one of the GD library functions that has a memory leak and is not included with Ubuntu (which is what I'm running). Got another option?

http://www.php.net/manual/en/function.imagerotate.php#93151 I've never used this function - but someone has posted an alternate imageRotate function to solve this problem that looks quite promising.

I tried many of those and found that imagerotateEquivalent() did the trick! Thanks, thetaiko. Just wanted to add to thetaiko's advice that if I make $nWidth and $nHeight double their size at first (at the top of my code), then set $x to 25 and $y to 171 in his code, then $width and $height in his code to half of $nWidth and $nHeight, it resolves this problem 100% using imagerotateEquivalent.

This isn't thread-safe, but PHP can shell out and do this command on Linux if ImageMagick is installed: convert -size 1000x400 gradient:#bfb-#4b4 -rotate -5 +repage -crop '960x250+25+100' greenbar.png. However, PHP does have an ImageMagick extension to do this internally.
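Pulling the accepted rotate-and-crop idea together, here is one way the whole pipeline might look in plain GD, assuming imagerotate() (or a working replacement such as the imagerotateEquivalent() mentioned above) is available. The 100px margin and the -3 degree angle are arbitrary choices, and hex2rgb() is the helper from the question:

<?php
// Build a slightly diagonal gradient: draw an oversized vertical gradient,
// rotate it a few degrees, then crop the middle back to the target size.
$nWidth  = 960;
$nHeight = 250;
$nMargin = 100;                      // extra pixels so the rotated image still covers the crop
$nBigW   = $nWidth  + 2 * $nMargin;
$nBigH   = $nHeight + 2 * $nMargin;

$hBig = imagecreatetruecolor($nBigW, $nBigH);
list($r1, $g1, $b1) = hex2rgb('#2b8ae1');   // hex2rgb() as defined in the question
list($r2, $g2, $b2) = hex2rgb('#0054a1');

for ($i = 0; $i < $nBigH; $i++) {
    $r = intval($r1 + ($r2 - $r1) * ($i / $nBigH));
    $g = intval($g1 + ($g2 - $g1) * ($i / $nBigH));
    $b = intval($b1 + ($b2 - $b1) * ($i / $nBigH));
    $hFill = imagecolorallocate($hBig, $r, $g, $b);
    imageline($hBig, 0, $i, $nBigW, $i, $hFill);      // one horizontal stripe per row
}

$hRotated = imagerotate($hBig, -3, 0);                // small angle = upper-left light source

// Crop the centre of the rotated image back down to the target size.
$hOut  = imagecreatetruecolor($nWidth, $nHeight);
$nSrcX = (imagesx($hRotated) - $nWidth)  / 2;
$nSrcY = (imagesy($hRotated) - $nHeight) / 2;
imagecopy($hOut, $hRotated, 0, 0, $nSrcX, $nSrcY, $nWidth, $nHeight);

header('Content-type: image/png');
imagepng($hOut);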
STACK_EXCHANGE
Catching up to test network gets stuck 2021-10-28 13:24:04,786 Info [agora.consensus.Ledger] - Beginning externalization of block #693 2021-10-28 13:24:04,786 Info [agora.consensus.Ledger] - Transactions: 9 - Enrollments: 0 2021-10-28 13:24:04,786 Info [agora.consensus.Ledger] - Validators: Active: 5 - Signing: 11111 - Slashed: 0 2021-10-28 13:24:04,796 Info [agora.consensus.Ledger] - Completed externalization of block #693 2021-10-28 13:24:04,831 Info [agora.consensus.Ledger] - Beginning externalization of block #694 2021-10-28 13:24:04,831 Info [agora.consensus.Ledger] - Transactions: 1 - Enrollments: 0 2021-10-28 13:24:04,831 Info [agora.consensus.Ledger] - Validators: Active: 5 - Signing: 11111 - Slashed: 0 2021-10-28 13:24:04,841 Info [agora.consensus.Ledger] - Completed externalization of block #694 2021-10-28 13:24:04,889 Info [agora.consensus.Ledger] - Beginning externalization of block #695 2021-10-28 13:24:04,889 Info [agora.consensus.Ledger] - Transactions: 1 - Enrollments: 0 2021-10-28 13:24:04,889 Info [agora.consensus.Ledger] - Validators: Active: 5 - Signing: 11111 - Slashed: 0 2021-10-28 13:24:04,900 Info [agora.consensus.Ledger] - Completed externalization of block #695 2021-10-28 13:24:04,925 Info [agora.consensus.Ledger] - Beginning externalization of block #696 2021-10-28 13:24:04,925 Info [agora.consensus.Ledger] - Transactions: 4 - Enrollments: 0 2021-10-28 13:24:04,925 Info [agora.consensus.Ledger] - Validators: Active: 5 - Signing: 11111 - Slashed: 0 2021-10-28 13:24:04,934 Info [agora.consensus.Ledger] - Completed externalization of block #696 2021-10-28 13:24:07,287 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:12,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:17,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:22,287 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:27,287 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:32,287 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:37,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:42,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:47,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:52,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:24:57,287 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:25:02,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:25:07,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:25:12,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing 2021-10-28 13:25:17,286 Info [agora.network.Manager] - Doing periodic network discovery: 0 required peers requested, 0 missing this happened on 300 something as well, restarting 
node helps.

I tested catching up to PROD when it was running v0.26.1 and it was able to catch up to block 1260 without getting stuck. However, these blocks were empty except for rewards every payout block. Next I will try with some larger transactions in the blocks.

Also tested catch-up after starting Faucet with transactions every 10 secs with a split count of 2. No issue seen so far. I will wait until there are more than 500 blocks with transactions to test again.

It happened every 330 blocks or something.

Just completed a catch-up of 1696 blocks, of which the last 400 had transactions (about 5000 in total). Will test again tomorrow, but so far it looks OK.

Looks like the problem was server side and has been resolved.
GITHUB_ARCHIVE
When I opened a few files that had been created in Word, I noticed that under Inspect Document it said custom XML data was found. What is custom XML data in a Microsoft Word document, and how do I find it so that I can see what it consists of?

Custom XML parts are a feature of the Office Open XML formats that let a document carry your own XML data alongside the visible content. Word takes the XML and makes it into the document that you see and print; without such embedded data it can be hard to find a piece of data, like an invoice number, inside a document. You can define your data using XML Schema syntax, attach the schema to the document, and then bind document content or add-ins to that data - for example, creating a Word document by merging a Word template with a custom XML part - and the same mechanism can be used to feed data into a PowerPoint presentation. Because this data is not visible in the document itself, the Document Inspector reports it as "Custom XML data" and offers to remove it.

Two further points of context. First, Office Open XML is a zipped, XML-based file format developed by Microsoft for representing Office documents, and the custom XML data-storing facility is part of that package format, intended to support integration with business data. Second, as a result of the i4i patent case, Microsoft had to remove the customized XML tagging feature from Word, which is why newer versions handle this data differently from the version in which a document was originally created, and why Word may be unable to remove the custom XML elements cleanly from an older package.
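If you want to look at the custom XML data directly, remember that a .docx file is an Office Open XML package, i.e. a ZIP archive, and any custom XML parts are stored under a customXml/ folder inside it. The following is a rough sketch using only the Python standard library; the file name is a placeholder:

import zipfile

path = "example.docx"  # placeholder: path to your own document

with zipfile.ZipFile(path) as docx:
    # Custom XML parts, if present, live under customXml/ inside the package.
    parts = [name for name in docx.namelist() if name.startswith("customXml/")]
    if not parts:
        print("No custom XML parts found.")
    for name in parts:
        print("----", name, "----")
        print(docx.read(name).decode("utf-8", errors="replace"))

This only shows what the embedded data is; removing it is still best done through the Document Inspector.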
OPCFW_CODE
Does the right against self incrimination have any bearing on FOIA requests? I mean, if I request something from a government official, and the documents relate to criminal activity or theoretically criminal activity, does the 5th amendment come into play at all?

The wording of your question is rather vague. Are you thinking of some situation where the content of the FOIA request, or the act of filing it, would somehow incriminate the person doing the filing? For instance, Joe murders someone, hides the body in a sewer, and then files an FOIA request inquiring as to how often sewers are inspected? Or are you thinking that the request asks for material which might incriminate the official responsible for responding to the request, and therefore the official might have a 5th amendment right not to respond?

Well, I'm thinking of a case where an official is statutorily required to create a certain document... call it an "annual report." If the official never created it, and is asked to furnish it, he will either have to furnish it or admit that the document was never created in the first place. Suppose either there is or there isn't a "criminal" penalty for failure to create such a report.

I might ask for a list of all illegal bribes paid by some government agency (hoping the response would be "No illegal bribes were paid"). The person supposed to write the response to the FOIA request might have paid an illegal bribe. On the other hand, I think the person writing the response would be supposed to base the response on "official" knowledge of the agency, not his personal knowledge. So he might go through all official documents and might write his report based on this.

With a FOIA request, you don't ask a government official, you ask a government agency. A government agency isn't protected by the 5th amendment. In practice, the response to a request isn't provided by an agency, but by an employee of that agency. That employee should respond based on the knowledge of the agency. If the employee has any private knowledge, that wouldn't become part of the response. That applies if the employee has private knowledge of a crime. If that crime was committed by someone else, it might have been illegal not to report the crime, but that is independent of the FOIA request. Now let's say the agency has knowledge of a crime that "the agency" committed (in reality I would assume some member of the agency did). Since an agency is not protected by the 5th amendment, it has to be part of the response. Now let's say the agency has knowledge of a crime that the employee writing the response committed. That's when self incrimination comes into play. I don't think the 5th amendment allows you to lie, including lying by omission. So quite possibly that employee can say "I'm not going to write the response to the FOIA request". In that case, the next employee would have to write the response, and that employee wouldn't be incriminating himself.
Now if all employees committed a crime together, then they might all be able to refuse to write the reply, but the agency still has to respond, so they might have to request outside help :-) What if, for example, you had filled out an opinion survey while leaving the Lincoln Memorial, and on that survey, you admitted something that, while not any indication of a crime by itself, when seen by others in the context of a criminal case against you, that writing/survey would essentially amount to you incriminating yourself by your own words -- what would happen there? Would the FOIA bar that from being released? My guess is it would not, it would only be barred (or have the potential to, at least) from the court itself as inadmissible due to self-incrimination. But IANAL
STACK_EXCHANGE
Which Base items are available without DLC or marketplace purchase? I want to farm for Base items, but I'm not sure how to go about it. I'm willing to slog through side missions, beat up mobs of AI for drops, or hoard cash for a specific vendor. I'd like to avoid paying any real money, either for the items themselves, or for DLC packs that open missions where the items are available, etc (I've already spent enough real money on other things in the game...for now). There have been multiple DLC additions to DCUO relating to Bases/Lairs, and a lot of the Base items that exist seem to come from areas that aren't available immediately, according to this thread. I'm not sure how complete that list is (I'm guessing not very), it's just one of the only sources I could find. But, it's hard to tell just by the location of the item drop whether or not the location is in a DLC episode area, an On-Duty scenario, or maybe just a mission-specific area I haven't reached. I don't want to go looking for items I'm not going to find, so I'm wondering what's even available for me, playing without a membership or DLC mission purchases. What Base customization items (themed bundles or individual items) are available for purchase with in-game currency, in non-DLC areas? Are there any scripted Base item drops in non-DLC areas? I would imagine this to be a relatively short list, and I don't need every single item, but if I'm off base and this is a huge list question I will gladly edit / close. I also added "scripted" in the last part because if an item is only available via truly random drop, it would be cool to know about but impossible to farm for, so those aren't necessary to list IMO All items are available with in-game currency via the Broker. If you are a Premium player, they are also available for trade. They can also be traded via League banks. Items drop throughout the game, but I assume you're referring to the base items for the Base feats. There is one that is lockbox-only: all the items in the Stuffed feat. The rest can be farmed elsewhere, but it is unlikely that you will find a complete set by farming only free areas. As you can see when new content comes out, the base items in loot tables frequently change. It may be that only content-related items change, though, while older base items remain un-rotated. Rusty Shackleford. Nice. What makes you think you know who I am? Who do you work for? XD
STACK_EXCHANGE
package vector

import "math"

// CAdd adds a constant to each element and returns a new vector.
func (a Vector32) CAdd(c float32) Vector32 {
	out := make(Vector32, len(a))
	for i, value := range a {
		out[i] = value + c
	}
	return out
}

// CSub subtracts a constant from each element and returns a new vector.
func (a Vector32) CSub(c float32) Vector32 {
	out := make(Vector32, len(a))
	for i, value := range a {
		out[i] = value - c
	}
	return out
}

// CMul multiplies each element by a constant and returns a new vector.
func (a Vector32) CMul(c float32) Vector32 {
	out := make(Vector32, len(a))
	for i, value := range a {
		out[i] = value * c
	}
	return out
}

// CPow raises each element to the power of c and returns a new vector.
func (a Vector32) CPow(c float32) Vector32 {
	out := make(Vector32, len(a))
	for i, value := range a {
		out[i] = float32(math.Pow(float64(value), float64(c)))
	}
	return out
}
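For context, here is a small usage sketch. It assumes the package also defines the element type as `type Vector32 []float32` (that definition is not shown in the excerpt above), and the import path is a placeholder:

package main

import (
	"fmt"

	"example.com/vector" // placeholder import path for the package above
)

func main() {
	v := vector.Vector32{1, 2, 3}

	fmt.Println(v.CAdd(10)) // [11 12 13]
	fmt.Println(v.CMul(2))  // [2 4 6]
	fmt.Println(v.CPow(2))  // [1 4 9]
}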
STACK_EDU
Address RATS Report Hi @Omkar20895 the report below is from a release auditing tool which I ran the source code over. If you have some time can you please take a look at some of the issues so we can discuss them? Thanks Entries in perl database: 33 Entries in ruby database: 46 Entries in python database: 62 Entries in c database: 334 Entries in php database: 55 Analyzing 20160804.090751.29944/podaacpy-master/podaac/podaac.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/podaac_data_source.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/podaac_utils.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/__init__.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/mcc.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/tests/mcc_test.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/tests/__init__.py Analyzing 20160804.090751.29944/podaacpy-master/podaac/tests/podaac_test.py Analyzing 20160804.090751.29944/podaacpy-master/docs/source/conf.py Analyzing 20160804.090751.29944/podaacpy-master/setup.py RATS results. Severity: Medium Issue: remove A function call is not being made here, but a reference is being made to a name that is normally a vulnerable function. It could be being assigned as a pointer to function. File: 20160804.090751.29944/podaacpy-master/podaac/podaac_utils.py Lines: 39 60 141 163 185 207 File: 20160804.090751.29944/podaacpy-master/podaac/tests/podaac_test.py Lines: 116 118 136 Severity: Medium Issue: open A function call is not being made here, but a reference is being made to a name that is normally a vulnerable function. It could be being assigned as a pointer to function. File: 20160804.090751.29944/podaacpy-master/podaac/mcc.py Lines: 87 File: 20160804.090751.29944/podaacpy-master/setup.py Lines: 55 62 Severity: Medium Issue: read A function call is not being made here, but a reference is being made to a name that is normally a vulnerable function. It could be being assigned as a pointer to function. File: 20160804.090751.29944/podaacpy-master/setup.py Lines: 55 If the above area is blank, please see the RATSweb Processing Notes Inputs detected at the following points 20160804.090751.29944/podaacpy-master/setup.py: Line 55: function read Double check to be sure that all input accepted from an external data source does not exceed the limits of the variable being used to hold it. Also make sure that the input cannot be used in such a manner as to alter your program's behaviour in an undesirable way. Total lines analyzed: 1861 Total time 0.008923 seconds 208562 lines per second @lewismc shall we discuss this?? Based on the fact that the API is essentially changing, I am closing this off.
GITHUB_ARCHIVE
Passing ActualWidth and ActualHeight into a MultiBinding

I have a WPF project (C#, MVVM using MVVM Light) and I have a MultiBinding that passes information to a converter, as you would expect. The code for that is here:

<ListBox.ItemTemplate>
    <DataTemplate>
        <Grid>
            <Path Stroke="{Binding LineColour}">
                <Path.Data>
                    <MultiBinding Converter="{StaticResource NodeToPathDataConverter}">
                        <Binding Path="NodeListViewModel.NodeList" Source="{StaticResource Locator}" />
                        <Binding />
                    </MultiBinding>
                </Path.Data>
            </Path>
        </Grid>
    </DataTemplate>
</ListBox.ItemTemplate>

The converter then uses the information from these bindings to spit out the details for the line. The problem is that the line goes from the (0,0) position of one control to another, but I want it to go from the middle of the controls in question. To do this, the converter in the MultiBinding needs to get the ActualWidth and ActualHeight of the controls in question.

So what are the controls in question? I have another, almost identical ListBox control below this one that uses the same data set, but instead of drawing lines between controls, it draws the controls themselves. This is as follows:

<ListBox.ItemTemplate>
    <DataTemplate>
        <Grid>
            <Thumb Name="myThumb" Template="{StaticResource NodeVisualTemplate}">
                <i:Interaction.Triggers>
                    <i:EventTrigger EventName="DragDelta">
                        <cmd:EventToCommand Command="{Binding NodeListViewModel.DragDeltaCommand, Source={StaticResource Locator}}" PassEventArgsToCommand="True" />
                    </i:EventTrigger>
                </i:Interaction.Triggers>
            </Thumb>
        </Grid>
    </DataTemplate>
</ListBox.ItemTemplate>

The reason I used two ListBox controls was to ensure that all lines appeared lower than all Thumbs. I tried many ways of putting them in the same ListBox, but it wasn't going to happen. They do share the same data set though, which is a 'NodeList'. So the bit I can't figure out is how to get the ActualWidth and ActualHeight of the Thumbs in the second ListBox into the converter in the first ListBox. Which is why I'm here. Any help would be greatly appreciated.

It might be simpler to somehow center the Thumbs at their coordinate origin, e.g. by putting them centered in a large enough Grid with fixed size and fixed (half of the size) negative margin.

Too broad, especially without more context (like a good [mcve]). That said, to address your literal question, if both ListBox.ItemsSource collections are bound to the same source, then your view model could mediate, by having a Mode=OneWayToSource binding to Actual... properties in the second list box and Mode=OneWay in the first. Note that a possibly better approach here would be to not use the first ListBox, but rather to put the graphics from that in the adorner layer. You should be able to bind to the adorner from the same template as used in the second ListBox.

Indeed Peter, but I have regularly been downvoted for providing 'too much information', so I've been trying to figure out just how much is 'enough'. I would say more was better here, but I'm concerned with visibility, as is anyone.

You can "bind" the ActualWidth and ActualHeight read-only properties of the Thumb in the second ListView to some double source properties of your data object using any of the workarounds mentioned here: Bind to ActualHeight of Item ItemsControl. You can then bind to the same source properties in the ItemTemplate of the first ListView. This should work provided that both ListViews bind to the same source collection.
There is no way to bind directly to the actual Thumb element in the second ListView using pure XAML though.
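To make the last suggestion concrete, one widely used workaround is a small attached behaviour that copies ActualWidth/ActualHeight into writable attached properties whenever the element's size changes; those can then be pushed into the bound item with OneWayToSource bindings. The class below and the ThumbWidth/ThumbHeight property names are illustrative, not taken from the question:

using System.Windows;

// Attached behaviour: mirrors a FrameworkElement's ActualWidth/ActualHeight into
// two writable attached properties, which can be bound OneWayToSource to the item.
public static class SizeObserver
{
    public static readonly DependencyProperty ObserveProperty =
        DependencyProperty.RegisterAttached("Observe", typeof(bool), typeof(SizeObserver),
            new PropertyMetadata(false, OnObserveChanged));

    public static readonly DependencyProperty ObservedWidthProperty =
        DependencyProperty.RegisterAttached("ObservedWidth", typeof(double), typeof(SizeObserver));

    public static readonly DependencyProperty ObservedHeightProperty =
        DependencyProperty.RegisterAttached("ObservedHeight", typeof(double), typeof(SizeObserver));

    public static bool GetObserve(FrameworkElement element) => (bool)element.GetValue(ObserveProperty);
    public static void SetObserve(FrameworkElement element, bool value) => element.SetValue(ObserveProperty, value);

    public static double GetObservedWidth(FrameworkElement element) => (double)element.GetValue(ObservedWidthProperty);
    public static void SetObservedWidth(FrameworkElement element, double value) => element.SetValue(ObservedWidthProperty, value);

    public static double GetObservedHeight(FrameworkElement element) => (double)element.GetValue(ObservedHeightProperty);
    public static void SetObservedHeight(FrameworkElement element, double value) => element.SetValue(ObservedHeightProperty, value);

    private static void OnObserveChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var element = (FrameworkElement)d;
        if ((bool)e.NewValue)
        {
            element.SizeChanged += OnSizeChanged;
            Update(element);
        }
        else
        {
            element.SizeChanged -= OnSizeChanged;
        }
    }

    private static void OnSizeChanged(object sender, SizeChangedEventArgs e) => Update((FrameworkElement)sender);

    private static void Update(FrameworkElement element)
    {
        // Push the current rendered size into the attached properties.
        SetObservedWidth(element, element.ActualWidth);
        SetObservedHeight(element, element.ActualHeight);
    }
}

In the second ListBox's template the Thumb would then get local:SizeObserver.Observe="True", local:SizeObserver.ObservedWidth="{Binding ThumbWidth, Mode=OneWayToSource}" and local:SizeObserver.ObservedHeight="{Binding ThumbHeight, Mode=OneWayToSource}", where ThumbWidth and ThumbHeight are hypothetical properties on each node item; the converter in the first ListBox can then read them from the item it already receives through the second <Binding />.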
STACK_EXCHANGE
UPDATE: We have launched a new fundraising campaign to fund Synfig development for next month. Please check it out here.

Hello, my name is Konstantin Dmitriev. I am running this campaign to fund the development of Synfig - the free and open-source animation software. I am mentoring a full-time developer, Ivan Mahonin, who is working on the Synfig code. We funded his work in previous months by running similar fundraising campaigns in September, October and November. Thanks to the success of those campaigns, Ivan has implemented the following features for Synfig: - Single-Window UI - Bones tools

The success of this campaign will allow Ivan to work full-time for a whole month in January 2014. Also, this campaign allows our users to directly influence Synfig development by choosing development priorities. Please read below.

"Choose Priority" perk
For a donation of a fixed amount (see the perks on the right) we offer the donor the exceptional privilege of choosing a priority of development. We will then spend the next month working on the priority you defined. Of course, we can't promise just anything, so please select from the list below: - Frame-by-frame animation support (bitmap only) - Sound support - Fix bugs and work towards releasing the stable version

NOTE: We don't guarantee that the feature from a selected priority will be completely finished within a month. Such promises are not compatible with development realities. What we promise is to dedicate no less than 85% of our working time to developing this priority for a full month. Still, one month is a significant period, and such dedication will lead to significant changes in the selected direction. Many of our users are aware that we regularly publish development snapshots, so you will be able to benefit from development results as soon as possible.

EXAMPLE: As a result of our earlier fundraising campaign, the "Choose priority" perk was sold for October. The priority chosen by the donor was "Single-window UI". You can see the results in this video.

"Development OS" perk
By default we use the Linux operating system (OS) for daily development (because we love Linux). As a result, the Linux version of Synfig is the most polished, because it goes through daily proofing. But for a fixed-amount donation (see the perks on the right) you can make us shift to another OS for the next month. If we use the OS of your choice in our daily development, then obviously the Synfig version will be especially polished to work on it: daily proofing => more testing => fewer bugs => better stability. No worries, this perk doesn't mean we drop support for other operating systems. It's just a question of choosing the OS for daily development. At the moment the only OS available for choice is Windows.

We understand that there are people who would like to support our efforts by other means. Especially for them we have prepared other perks to express our gratitude:
- Tutorial video. A tutorial video about Synfig on ANY given theme. Length is limited to 5 minutes. Languages can be English or Russian. The video will be publicly available for everyone. The tutorial video will be produced by Nikolai Mamashev, who was in charge of producing the Synfig Training Package.
- Report sponsor. You can track the progress of the development by reading my weekly reports. If you claim this perk, then one of my weekly reports will be indicated as sponsored by you, with an (optional) link to your website. The mention will appear as "This report is sponsored by: ..." text at the top of the report.
- Sponsor logo.
Your logo will be displayed for one month on the front page of the Synfig website.

How much do we need?
Our monthly funding amount is $1300, but the goal is set to $1245 because we have an extra $55 collected from our previous campaign. It could happen that the campaign will go beyond the target goal, so I would like to outline the important funding milestones:
- $1245 is enough to fund our work until February 1st, 2014
- $2545 - until March 1st, 2014.
- $3845 - until April 1st, 2014.
- $5145 - until May 1st, 2014.

Who is behind this campaign
Konstantin Dmitriev. This is me. I am the person who is responsible for distributing the collected funds and mentoring the development process. I am an open-source activist with a particular focus on animation. My activities include expertise, development and popularization of free-software animation solutions. Below you can see some of my works, made as an animator purely with free software: I also work in education, teaching animation to kids using libre and open-source tools. Here are some of the works made by my students: The sources for many of the works above are freely available under a Creative Commons license.

Ivan Mahonin. He is a professional C/C++ developer, hired to work on the Synfig code. You can track Ivan's current contributions to Synfig on this page.

Nikolai Mamashev. He is a "wizard of Synfig", an awesome artist who uses free software only. If you are a Synfig user, then you know at least one of his works - the current splash screen was created by him. He is known as the lead artist of Morevna Project: Demo, and his portfolio is also worth a look. Nikolai is in charge of providing artistic support for us, proof testing and tutorial production.

With this campaign we would like to provide a way to control the direction of development and, at the same time, support the vitality of Synfig development. Users can choose the priorities, and all collected funds will go to support further sustainable development of Synfig. It's fair and everyone benefits - we all get better free animation software. You can track the results of our work by reading my weekly reports. Also, we are open to considering other models of funding for Synfig. If you would like to sponsor the implementation of a particular feature, please contact us (see the comments tab at the top).
OPCFW_CODE
The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution (of events in a probability space)[clarification needed] can be represented as events generated by a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields. It states that a probability distribution that has a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (or complete subgraphs) of the graph. The relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin and Frank Spitzer in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971. Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett, Preston and Sherman in 1973, with a further proof by Julian Besag in 1974. It is a trivial matter to show that a Gibbs random field satisfies every Markov property. As an example of this fact, see the following: In the image to the right, a Gibbs random field over the provided graph has the form . If variables and are fixed, then the global Markov property requires that: (see conditional independence), since forms a barrier between and . With and constant, where and . This implies that . To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proven: Let denote the set of all random variables under consideration, and let and denote arbitrary sets of variables. (Here, given an arbitrary set of variables , will also denote an arbitrary assignment to the variables from .) for functions and , then there exist functions and such that In other words, provides a template for further factorization of . Proof of Lemma 1 In order to use as a template to further factorize , all variables outside of need to be fixed. To this end, let be an arbitrary fixed assignment to the variables from (the variables not in ). For an arbitrary set of variables , let denote the assignment restricted to the variables from (the variables from , excluding the variables from ). Moreover, to factorize only , the other factors need to be rendered moot for the variables from . To do this, the factorization will be re-expressed as For each : is where all variables outside of have been fixed to the values prescribed by . Let and for each so What is most important is that when the values assigned to do not conflict with the values prescribed by , making "disappear" when all variables not in are fixed to the values from . Fixing all variables not in to the values from gives which finally gives: Lemma 1 provides a means of combining two different factorizations of . The local Markov property implies that for any random variable , that there exists factors and such that: where are the neighbors of node . Applying Lemma 1 repeatedly eventually factors into a product of clique potentials (see the image on the right). End of Proof - Lafferty, John D.; Mccallum, Andrew (2001). "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data". ICML. 
Retrieved 14 December 2014. by the fundamental theorem of random fields (Hammersley & Clifford, 1971) - Dobrushin, P. L. (1968), "The Description of a Random Field by Means of Conditional Probabilities and Conditions of Its Regularity", Theory of Probability and Its Applications, 13 (2): 197–224, doi:10.1137/1113026 - Spitzer, Frank (1971), "Markov Random Fields and Gibbs Ensembles", The American Mathematical Monthly, 78 (2): 142–154, doi:10.2307/2317621, JSTOR 2317621 - Hammersley, J. M.; Clifford, P. (1971), Markov fields on finite graphs and lattices (PDF) - Clifford, P. (1990), "Markov random fields in statistics", in Grimmett, G. R.; Welsh, D. J. A. (eds.), Disorder in Physical Systems: A Volume in Honour of John M. Hammersley, Oxford University Press, pp. 19–32, ISBN 978-0-19-853215-6, MR 1064553, retrieved 2009-05-04 - Grimmett, G. R. (1973), "A theorem about random fields", Bulletin of the London Mathematical Society, 5 (1): 81–84, CiteSeerX 10.1.1.318.3375, doi:10.1112/blms/5.1.81, MR 0329039 - Preston, C. J. (1973), "Generalized Gibbs states and Markov random fields", Advances in Applied Probability, 5 (2): 242–261, doi:10.2307/1426035, JSTOR 1426035, MR 0405645 - Sherman, S. (1973), "Markov random fields and Gibbs random fields", Israel Journal of Mathematics, 14 (1): 92–103, doi:10.1007/BF02761538, MR 0321185 - Besag, J. (1974), "Spatial interaction and the statistical analysis of lattice systems", Journal of the Royal Statistical Society, Series B, 36 (2): 192–236, JSTOR 2984812, MR 0373208 - Bilmes, Jeff (Spring 2006), Handout 2: Hammersley–Clifford (PDF), course notes from University of Washington course. - Grimmett, Geoffrey, Probability on Graphs, Chapter 7 - Langseth, Helge, The Hammersley–Clifford Theorem and its Impact on Modern Statistics (PDF)
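Stated in symbols, and independently of the particular example graph discussed above, the factorization asserted by the theorem takes the standard general form (this is the generic textbook statement, added here for reference):

P(x_1, \dots, x_n) \;=\; \frac{1}{Z} \prod_{C \in \mathrm{cl}(G)} \phi_C(x_C),
\qquad
Z \;=\; \sum_{x_1, \dots, x_n} \; \prod_{C \in \mathrm{cl}(G)} \phi_C(x_C),

where \mathrm{cl}(G) is the set of cliques of the undirected graph G, each factor \phi_C > 0 depends only on the variables x_C in the clique C, and Z is the normalizing constant. The theorem states that a strictly positive distribution admits such a factorization if and only if it satisfies the Markov properties with respect to G.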
OPCFW_CODE
If the data set is too small, then no matter how it is processed there will always be overfitting problems and the accuracy will not be very high. This is where pre-trained models come in. A pre-trained model is usually trained on a large amount of data, and it should be chosen so that the problem it was trained on is similar to the problem at hand. A simple and old model is used here, VGG16, whose architecture is very similar to the one we used before.

There are two ways to use a pre-trained network: feature extraction and fine-tuning. First, recall from earlier that a convolutional neural network consists of two parts: the first part is composed of convolution layers and pooling layers, and the second part is a flattened, densely connected classifier. The first part is simply called the convolutional base. The classifier at the end was trained to separate the classes of the original task, so it does not generalize well - a little thought makes this clear - which is why the convolutional base is the part that is usually reused. There is also the question of problem similarity: logically, the fewer layers, the simpler the features being learned. So the higher the similarity between the new problem and the original one, the more layers of the existing model can be reused; if the similarity is not high, consider reusing fewer layers of the existing model's structure.

Now let's look at the concrete implementation.

from keras.applications import VGG16

conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))

We import VGG16 and set the parameters of the convolutional base. For weights, None means random initialization, i.e. no pre-trained weights are loaded, while 'imagenet' loads the ImageNet pre-trained weights. include_top says whether to include the second part, the classifier; it is not used here, because this is a cats-and-dogs problem and we will add our own classifier. Finally, there is the input size. Use the summary method to take a look at the network architecture, which should feel familiar:

Layer (type)                 Output Shape              Param #
input_1 (InputLayer)         (None, 150, 150, 3)       0
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0

The final output is (4, 4, 512), so we add a densely connected classifier on top of this base to solve the two-class problem. The remaining question is how to run the model. Used on its own, the convolutional base cannot take advantage of data augmentation; to use augmentation you have to add the dense layers on top of the model and train it end to end, which leads to a significant increase in the computational cost of training. The two methods - without and with data augmentation - are analysed later.
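As a rough sketch of the second, more expensive route (extending the frozen convolutional base with a small densely connected classifier so that data augmentation can be used), the model definition might look like the following. The layer sizes and optimizer settings are illustrative choices rather than values taken from the text above:

from keras import models, layers, optimizers

# Freeze the convolutional base so its ImageNet weights are not updated during training.
conv_base.trainable = False

model = models.Sequential()
model.add(conv_base)                               # outputs (None, 4, 4, 512) feature maps
model.add(layers.Flatten())                        # -> (None, 8192)
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))   # binary cats-vs-dogs output

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])
model.summary()

With the base frozen, only the weights of the two Dense layers are trained, and images can now be fed through an augmenting generator, because every batch passes through the convolutional base at training time.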
OPCFW_CODE
Matplotlib: lotka volterra tutorial

Date: 2017-03-12 (last modified), 2007-11-11 (created) - page was renamed from LoktaVolterraTutorial

This example describes how to integrate ODEs with the scipy.integrate module, and how to use the matplotlib module to plot trajectories, direction fields and other information. You can get the source code for this tutorial here: tutorial_lokta-voltera_v4.py.

Presentation of the Lotka-Volterra Model

We will have a look at the Lotka-Volterra model, also known as the predator-prey equations, which is a pair of first order, non-linear, differential equations frequently used to describe the dynamics of biological systems in which two species interact, one a predator and the other its prey. The model was proposed independently by Alfred J. Lotka in 1925 and Vito Volterra in 1926, and can be described by

du/dt =  a*u -   b*u*v
dv/dt = -c*v + d*b*u*v

with the following notations:

u: number of prey (for example, rabbits)
v: number of predators (for example, foxes)
a, b, c, d are constant parameters defining the behavior of the population:
a is the natural growing rate of rabbits, when there's no fox
b is the natural dying rate of rabbits, due to predation
c is the natural dying rate of fox, when there's no rabbit
d is the factor describing how many caught rabbits are needed to create a new fox

We will use X=[u, v] to describe the state of both populations.

Definition of the equations:

#!python
from numpy import *
import pylab as p

# Definition of parameters
a = 1.
b = 0.1
c = 1.5
d = 0.75

def dX_dt(X, t=0):
    """ Return the growth rate of fox and rabbit populations. """
    return array([ a*X[0] -   b*X[0]*X[1] ,
                  -c*X[1] + d*b*X[0]*X[1] ])

Before using SciPy to integrate this system, we will have a closer look at the position of the equilibrium points. Equilibrium occurs when the growth rate is equal to 0. This gives two fixed points:

#!python
X_f0 = array([ 0. ,  0.])
X_f1 = array([ c/(d*b), a/b])
all(dX_dt(X_f0) == zeros(2)) and all(dX_dt(X_f1) == zeros(2))  # => True

Stability of the fixed points

Near these two points, the system can be linearized: dX_dt = A_f*X, where A_f is the Jacobian matrix evaluated at the corresponding point. We have to define the Jacobian matrix:

#!python
def d2X_dt2(X, t=0):
    """ Return the Jacobian matrix evaluated in X. """
    return array([[ a - b*X[1],  -b*X[0]       ],
                  [ b*d*X[1],    -c + b*d*X[0] ]])

So near X_f0, which represents the extinction of both species, we have:

#!python
A_f0 = d2X_dt2(X_f0)
# >>> array([[ 1. , -0. ],
#            [ 0. , -1.5]])

Near X_f0, the number of rabbits increases and the population of foxes decreases. The origin is therefore a saddle point. Near X_f1, we have:

#!python
A_f1 = d2X_dt2(X_f1)
# >>> array([[ 0.  , -2.  ],
#            [ 0.75,  0.  ]])
# whose eigenvalues are +/- sqrt(c*a)*1j:
lambda1, lambda2 = linalg.eigvals(A_f1)
# >>> (1.22474j, -1.22474j)
# They are imaginary numbers. The fox and rabbit populations are periodic,
# as follows from further analysis. Their period is given by:
T_f1 = 2*pi/abs(lambda1)
# >>> 5.130199

Integrating the ODE using scipy.integrate

Now we will use the scipy.integrate module to integrate the ODEs. This module offers a method named odeint, which is very easy to use to integrate ODEs:

#!python
from scipy import integrate

t = linspace(0, 15, 1000)   # time
X0 = array([10, 5])         # initial conditions: 10 rabbits and 5 foxes
X, infodict = integrate.odeint(dX_dt, X0, t, full_output=True)
infodict['message']         # >>> 'Integration successful.'

`infodict` is optional, and you can omit the `full_output` argument if you don't want it.
Type "info(odeint)" if you want more information about odeint inputs and outputs. We can now use Matplotlib to plot the evolution of both populations: #!python rabbits, foxes = X.T f1 = p.figure() p.plot(t, rabbits, 'r-', label='Rabbits') p.plot(t, foxes , 'b-', label='Foxes') p.grid() p.legend(loc='best') p.xlabel('time') p.ylabel('population') p.title('Evolution of fox and rabbit populations') f1.savefig('rabbits_and_foxes_1.png') The populations are indeed periodic, and their period is close to the value T_f1 that we computed. Plotting direction fields and trajectories in the phase plane¶ We will plot some trajectories in a phase plane for different starting points between X_f0 and X_f1. We will use Matplotlib's colormap to define colors for the trajectories. These colormaps are very useful to make nice plots. Have a look at ShowColormaps if you want more information. values = linspace(0.3, 0.9, 5) # position of X0 between X_f0 and X_f1 vcolors = p.cm.autumn_r(linspace(0.3, 1., len(values))) # colors for each trajectory f2 = p.figure() #------------------------------------------------------- # plot trajectories for v, col in zip(values, vcolors): X0 = v * X_f1 # starting point X = integrate.odeint( dX_dt, X0, t) # we don't need infodict here p.plot( X[:,0], X[:,1], lw=3.5*v, color=col, label='X0=(%.f, %.f)' % ( X0, X0) ) #------------------------------------------------------- # define a grid and compute direction at each point ymax = p.ylim(ymin=0) # get axis limits xmax = p.xlim(xmin=0) nb_points = 20 x = linspace(0, xmax, nb_points) y = linspace(0, ymax, nb_points) X1 , Y1 = meshgrid(x, y) # create a grid DX1, DY1 = dX_dt([X1, Y1]) # compute growth rate on the gridt M = (hypot(DX1, DY1)) # Norm of the growth rate M[ M == 0] = 1. # Avoid zero division errors DX1 /= M # Normalize each arrows DY1 /= M #------------------------------------------------------- # Drow direction fields, using matplotlib 's quiver function # I choose to plot normalized arrows and to use colors to give information on # the growth speed p.title('Trajectories and direction fields') Q = p.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=p.cm.jet) p.xlabel('Number of rabbits') p.ylabel('Number of foxes') p.legend() p.grid() p.xlim(0, xmax) p.ylim(0, ymax) f2.savefig('rabbits_and_foxes_2.png') This graph shows us that changing either the fox or the rabbit population can have an unintuitive effect. If, in order to decrease the number of rabbits, we introduce foxes, this can lead to an increase of rabbits in the long run, depending on the time of intervention. 
We can verify that the function IF defined below remains constant along a trajectory:

#!python
def IF(X):
    u, v = X
    return u**(c/a) * v * exp( -(b/a)*(d*u+v) )

# We will verify that IF remains constant for different trajectories
for v in values:
    X0 = v * X_f1                                 # starting point
    X = integrate.odeint(dX_dt, X0, t)
    I = IF(X.T)                                   # compute IF along the trajectory
    I_mean = I.mean()
    delta = 100 * (I.max() - I.min()) / I_mean
    print 'X0=(%2.f,%2.f) => I ~ %.1f |delta = %.3G %%' % (X0[0], X0[1], I_mean, delta)

# >>> X0=( 6, 3) => I ~ 20.8 |delta = 6.19E-05 %
#     X0=( 9, 4) => I ~ 39.4 |delta = 2.67E-05 %
#     X0=(12, 6) => I ~ 55.7 |delta = 1.82E-05 %
#     X0=(15, 8) => I ~ 66.8 |delta = 1.12E-05 %
#     X0=(18, 9) => I ~ 72.4 |delta = 4.68E-06 %

Plotting iso-contours of IF can be a good representation of trajectories, without having to integrate the ODE:

#!python
#-------------------------------------------------------
# plot iso contours
nb_points = 80                                    # grid size
x = linspace(0, xmax, nb_points)
y = linspace(0, ymax, nb_points)

X2, Y2 = meshgrid(x, y)                           # create the grid
Z2 = IF([X2, Y2])                                 # compute IF on each point

f3 = p.figure()
CS = p.contourf(X2, Y2, Z2, cmap=p.cm.Purples_r, alpha=0.5)
CS2 = p.contour(X2, Y2, Z2, colors='black', linewidths=2.)
p.clabel(CS2, inline=1, fontsize=16, fmt='%.f')
p.grid()
p.xlabel('Number of rabbits')
p.ylabel('Number of foxes')
p.ylim(1, ymax)
p.xlim(1, xmax)
p.title('IF contours')
f3.savefig('rabbits_and_foxes_3.png')
p.show()

Section author: PauliVirtanen, Bhupendra
OPCFW_CODE
Via Jim Groom (Ghost in a Shell) and Tim Owens (Beyond LAMP), I note Cloudron.io, a cPanel/Installatron-like application (as far as the user is concerned) for launching dockerised applications from a digital application shelf: The experience is a bit like having a hosted version of Kitematic that lets you launch containers on that host, or a revamped version of Sandstorm (Personal Application Hosting, Dreams of a Docker AppStore, and an Incoming Sandstorm?). The applications themselves look as if they’re defined from a git repo containing a Dockerfile plus some Cloudron config info (examples). To a certain extent, this simplifies the rigmarole of launching containers on a remote host (if you use something like Docker Cloud, you need to go in to DockerCloud, launch a server, then get the container running on the server (old example – Tutum became Docker Cloud; the process remains much the same). The Docker Cloud route also allows you to launch either a single container or a stack of containers, which is to set, a set of linked containers run via Docker Compose. (For the use cases I’m interested in, we might calling such configurations linked applications). I can see how Jim is excited by the idea of Cloudron as a way of extending the hosting service opportunities offered by Reclaim Hosting: it opens up the possibility of allowing users to host applications defined via Dockerfiles, rather than just the applications configured for use via cPanel. But this is still exactly not what I am interested in. Cloudron (and cPanel) provide UIs to allow “mortals” to start self hosting web based applications that they can start once, use thereafter. For example, you can use Cloudron to self-host your own version of WordPress. Every so often you go to your (self-hosted) WordPress blog and write a blog post, but the rest of the time it just sits there, running, and serving blog post web pages to your loyal readers or passing web search traffic. But what I am interested in is are applications that I start when I want to use them, use them, then quit them (a start-use-quit model). For example, consider something like Microsoft Word, as used to create or edit a text document. There are various ways of doing this: - Using my desktop version of Word, I would probably: start the application, create the document, save the document, close the application. - Using Office 365, a permanently running Word editor in the cloud, I would login to Office 365 via my browser, create the document, save it to my Office 365 online file area, and then close the browser tab (Office 365 is still running in the cloud). But what if I wanted to have my own version of Word that I wanted to run in the cloud, much as I run my own copy of the Word application on my desktop? If I was to run it permanently, as Office 365 runs permanently, as a self-hosted application like WordPress runs permanently, I would be paying for server costs permanently. I would also need to have some sort of authentication layer to stop other people using “my” version of Word online, and seeing my files stored there. Instead, I want an environment that lets me start an application in the cloud, do whatever task I want in the application (create or edit a document), save the document, then close the application. I would only be hosting (in the sense of serving) the application as I used it, and then I would destroy it. 
Ideally, I would save the document I created somewhere persistent so that I could re-edit it using a newly started version of the editor at a later date. In terms of resource usage, this is how I see the differences between the traditional self-hosted application, a personal desktop application, and what we might term a personal (hosted) application (which might also be a personal self-hosted application): In addition, I would expect to have privileged (authenticated) access to my personal applications. Unlike WordPress or Ghost, which run permanently and serve pages to the public as well as providing authenticated access to one or more (invited) users allowing them to edit posts, I would want to deny access to the site to anyone but me. This means that the personal hosted application should either be visible to the user from their dashboard, or be reachable via an authenticated URL (with some ports perhaps open to the public). Something like this maybe? Also, the public page might actually be an app specific authentication page (for example, a Jupyter notebook login page). Unlike permanently running self-hosted apps, the personal apps are temporary, and only run when the user wants to use them. The linked storage is, however, persistent. The above architecture itself defines a generic self-hosted workbench environment, where the user can run applications on their workbench as personal applications as and when they need them (and hence only consume the resources required to run them when they need them). One possible way of gaining insight into why this is useful is to consider the following: a domain of one's own gives you a presence you own (for some definition of "own"…) somewhere on the web; a server of one's own provides a server that lets you easily run your own services (which can often be a b*****d for a novice to install), which may include permanently running services that populate your domain. A personal application server of your own (or maybe a workbench of your own?) lets you easily run software applications for personal use that can be a b*****d to install if you have to build and install them from scratch yourself (as is the case with a lot of scientific software applications). In addition, the workbench of your own makes it easy to launch linked applications (e.g. a stats analysis application linked to a database server) using things like pre-prepared docker compose scripts.
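To make the start-use-quit model concrete, here is a rough sketch (not from the original post) using the Docker SDK for Python. The image name, port, and host folder are placeholders chosen for illustration; the point is simply that the application container only exists while it is being used, while the mounted folder keeps the documents between sessions.

import docker  # pip install docker

client = docker.from_env()

# Start the application only when I want to use it (image, port, and paths are illustrative).
app = client.containers.run(
    "jupyter/base-notebook",        # hypothetical "personal application" image
    detach=True,
    ports={"8888/tcp": 8888},       # expose the app's UI on localhost:8888
    volumes={"/home/me/notebooks":  # persistent store for the documents I create
             {"bind": "/home/jovyan/work", "mode": "rw"}},
)

# ... use the application in the browser, saving work into the mounted folder ...

# Quit: the container (and its running cost) goes away, the documents remain on disk.
app.stop()
app.remove()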
Renamed packages aren't usable

My full story lives there; the short story is that the name and package values are reversed when a package is renamed in a dependency. FTR, the extract from the Cargo Book, with matching permalink:

In the git repo:

{
    // Name of the dependency.
    // If the dependency is renamed from the original package name,
    // this is the new name. The original package name is stored in
    // the `package` field.
    "name": "rand",
    // The SemVer requirement for this dependency.
    // This must be a valid version requirement defined at
    // https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html.
    "req": "^0.6",
    // Array of features (as strings) enabled for this dependency.
    "features": ["i128_support"],
    // Boolean of whether or not this is an optional dependency.
    "optional": false,
    // Boolean of whether or not default features are enabled.
    "default_features": true,
    // The target platform for the dependency.
    // null if not a target dependency.
    // Otherwise, a string such as "cfg(windows)".
    "target": null,
    // The dependency kind.
    // "dev", "build", or "normal".
    // Note: this is a required field, but a small number of entries
    // exist in the crates.io index with either a missing or null
    // `kind` field due to implementation bugs.
    "kind": "normal",
    // The URL of the index of the registry where this dependency is
    // from as a string. If not specified or null, it is assumed the
    // dependency is in the current registry.
    "registry": null,
    // If the dependency is renamed, this is a string of the actual
    // package name. If not specified or null, this dependency is not
    // renamed.
    "package": null,
}

In the cargo publish payload:

{
    // Name of the dependency.
    // If the dependency is renamed from the original package name,
    // this is the original name. The new package name is stored in
    // the `explicit_name_in_toml` field.
    "name": "rand",
    // The semver requirement for this dependency.
    "version_req": "^0.6",
    // Array of features (as strings) enabled for this dependency.
    "features": ["i128_support"],
    // Boolean of whether or not this is an optional dependency.
    "optional": false,
    // Boolean of whether or not default features are enabled.
    "default_features": true,
    // The target platform for the dependency.
    // null if not a target dependency.
    // Otherwise, a string such as "cfg(windows)".
    "target": null,
    // The dependency kind.
    // "dev", "build", or "normal".
    "kind": "normal",
    // The URL of the index of the registry where this dependency is
    // from as a string. If not specified or null, it is assumed the
    // dependency is in the current registry.
    "registry": null,
    // If the dependency is renamed, this is a string of the new
    // package name. If not specified or null, this dependency is not
    // renamed.
    "explicit_name_in_toml": null,
}

From looking at the payloads, the confusion seems understandable: both have a name field, but the two do not share the same meaning.

Closed in #40
How can I add the second string of an array to the first string of the same array, ruby I have an array of cities and states. Looks something like this: locations = ["Colorado Springs","CO","Denver","CO","Kissimmee","FL","Orlando", "FL"] I would ultimately like to get this result: locations = ["Colorado Springs, CO","CO","Denver, CO","CO","Kissimmee, FL","FL","Orlando, FL", "FL"] I did this to test: locations[0] << ", #{locations[1]}" And got this as a result: locations = ["Colorado Springs, CO", "CO", "Denver", "CO", "Kissimmee", "FL", "Orlando", "FL"] I am attempting the code below to convert the rest of the array but getting nil as a response: locations = ["Colorado Springs","CO","Denver","CO","Kissimmee","FL","Orlando", "FL"] counter0 = 0 counter1 = 1 while counter0 < locations.length locations[counter0] << locations[counter1] counter0 += 2 counter1 += 2 end => nil Why the mad rush to select an answer? No less, an answer that is incorrect. As I write this, you've change your selection to another incorrect answer. Look at the return values both give. They are not what you said you wanted in the question. Do not change your question! I suggest you retract the greenie and wait a couple of hours for the dust to clear, then make a selection. There is no rush to make a selection. Consider also that some readers may still be working on answers and others may not bother giving an answer because you've already made a selection. It is tricky to change the length of an array while iterating through it. You better avoid it. @sawa: Actually, the length of the array doesn't change. Only items within the array are modified. locations.each_slice(2).flat_map { |city, state| ["#{city}, #{state}", state] } #=> ["Colorado Springs, CO", "CO", "Denver, CO", "CO", # "Kissimmee, FL", "FL", "Orlando, FL", "FL"] The key is to use flat_map. locations.each_slice(2).flat_map{|x, y| [[x, y].join(", "), y]} # => ["Colorado Springs, CO", "CO", "Denver, CO", "CO", "Kissimmee, FL", "FL", "Orlando, FL", "FL"] You beat me by 14 seconds. Can we call it a tie? Yeah. I thought so. Plus, there is a minor difference. I find it a bit scary. Just the other day my wife said she thought I was becoming less forgiving of other's mistakes and at times was downright rude. I hope I haven't influenced you in a bad way. But, caring about the details is a good thing.
Megafaunal extinctions in the tropics The extinction of megafaunal populations during the Late Pleistocene and Holocene is a prominent part of discussions relating to the timing and nature of human impacts on environments, particularly in the context of the Anthropocene. This project seeks to bring together novel chronological, palaeoenvironmental, and zooarchaeological methodologies, from regions across the tropics, to better understand the role of Homo sapiens in the extinction of large mammalian taxa in the tropics. Questions of megafaunal (animals >44kg) extinctions during the Late Pleistocene and Holocene have been of extreme interest to the international archaeological and palaeontological communities, with it being variously argued that certain taxa were driven to extinction by human impacts, climatic change, disease, or even asteroid impacts. The potential impact of humans on various now-extinct genera of large animals that once roamed the earth is linked to broader questions regarding the scale of our species' impact on ecosystems. However, these discussions often neglect the species dynamics of entire ecologies, including smaller mammals and reptiles, and have generally been limited to Europe, the Americas, and Australia. This project is committed to undertaking studies of human impacts on animal populations in the less-studied regions of island East Africa, South Asia, Southeast Asia, and Melanesia. In particular, we seek to refine chronologies of faunal decline and extinction, as well as continuity, in these less-studied regions using detailed radiocarbon and optically-stimulated luminescence methodologies. We are also working to develop a consortium of specialists applying different novel methods to study the changing demography, biology, and ecology of these taxa around the world. We are also interested in the comparison of faunal extinctions on mainland versus island ecosystems. Boivin, N.L., Zeder, M.A., Fuller, D.Q., Crowther, A., Larson, G., Erlandson, J.M., Denham, T. & M.D. Petraglia. 2016. Ecological consequences of human niche construction: examining long-term anthropogenic shaping of global species distributions. Proceedings of the National Academy of Sciences of the United States of America 113: 6388-6396. Roberts, P., Delson, E., Miracle, P., Ditchfield, P., Roberts, R.G., Jacobs, Z., Blinkhorn, J., Ciochon, R.L., Fleagle, J.G., Frost, S.R., Gilbert, C.C., Gunnell, G.F., Harrison, T., Korisettar, R., & M.D. Petraglia. 2014. Continuity of mammalian fauna over the last 200,000 y in the Indian subcontinent. Proceedings of the National Academy of Sciences of the United States of America 111(16): 5848-5853.
command not found When I try to run the command I get the following error: I'm currently on Pop!_OS 18.04 LTS, and my VSCode info is: Hey! I had this issue in an earlier 0.0.x versions, but it should work in versions 0.1.0 and above. Can you check which version of the plugin you have installed? VS Code System Information OS Information key value arch x64 platform linux type Linux release 4.15.0-34-generic EOL "\n" endianness LE Process Information key value arch x64 versions key value http_parser 2.7.0 node 8.9.3 v8 6.1.534.41 uv 1.15.0 zlib 1.2.11 ares 1.10.1-DEV modules 57 nghttp2 1.25.0 openssl 1.0.2n env key value LANG en_US.UTF-8 VS Code Information key value version 1.27.2 env key value appName Visual Studio Code language en extensions VSCode HackerTyper open in marketplace open in vscode key value id jevakallio.vscode-hacker-typer isActive true name vscode-hacker-typer version 0.1.0 displayName VSCode HackerTyper description Hacker Typer extension for looking cool while live coding publisher jevakallio categories ["Other"] That's odd! I'll have to investigate the root cause, Works On My Machine ™️ Haha yeah, I figured it was one of those situations. I'm not sure what OS you run, but maybe it's a linux issue? Thank you! Can you try to install v0.1.1 and try again? I noticed there was a casing issue in one of my imports, which will work on OSX case-insensitive file system, but will very likely fail on a case-sensitive fs. I haven't tried it on Linux, don't have any VMs on this machine, so if you could give it a shot and report back, that would be great! I have exact the same issue on macos mojave. Plugin was installed today Somehow I have a similar problem, with the "save macro" command. VS code system info: Version: 1.30.1 (user setup) Commit: dea8705087adb1b5e5ae1d9123278e178656186a Date: 2018-12-18T18:12:07.165Z Electron: 2.0.12 Chrome: 61.0.3163.100 Node.js: 8.9.3 V8: 6.1.534.41 OS: Windows_NT x64 6.1.7601 I will try later on today on a different PC (non-work PC) to see if that somehow makes a difference. Not running on my VSCODE https://github.com/jevakallio/vscode-hacker-typer/pull/23 fixed the issue for me. Using version 0.2.1 of nodename.vscode-hacker-typer-fork, I attempted to record a macro and got: Command 'HackerTyper: Record Macro' resulted in an error (command 'nodename.vscode-hacker-typer-fork.recordMacro' not found). I've tried restarting VSCode several times, as well as uninstalling/reinstalling this extension. VSCode info: Version: 1.44.2 (user setup) Commit: ff915844119ce9485abfe8aa9076ec76b5300ddd Date: 2020-04-16T16:36:23.138Z Electron: 7.1.11 Chrome: 78.0.3904.130 Node.js: 12.8.1 V8: 7.8.279.23-electron.0 OS: Windows_NT x64 10.0.18363 [Windows 10] Perhaps it would be best to discuss issues about the fork at https://github.com/nodename/vscode-hacker-typer/issues . I will say please have the status bar open (View -> Appearance -> Show Status Bar) while you reproduce this issue, and tell me at that issues page what you see. Thanks!
Are any descriptions of Rudra's aspect? Are any descriptions of Rudra's aspect ? If yes, where we can find them? what do you mean by aspect? Did you mean Saumya and Raudra aspect.../ Ghora-Aghora aspect...?... Any description of Rudra from the scriptures. For example does he has horns? does he wear tiger skin? etc.. There are 11 Rudras. About which Rudra are you talking? All of them...i am searching their aspects. @LuckyPashu if you are really interested in in-depth exploration and have the patience to read then please try "Sanatana Dhara Rudra Shiva across Vedas and Itihasa" in Google. Good luck. Read the shivopasana mantra of the Krishna yajurveda Mahanarayanaya Upanishad. It describes and covers the 5 primary aspects of shiva/rudra. Rudra/Shiva is of 5 aspects sadyojata, ghora/aghora, vamadeva, tatpurusha, ishaana. Sadyojata: source of all existence Vamadeva: the most beautiful and effulgent Aghora/Ghora: terrifying and non terrifying tatpurusha: the Supreme Personality/Purusha Ishaana: the supreme ruler and creator of all beings His specific form is described as the following: namo hiraNya-baahave hiraNya-varNaaya hiraNya-ruupaya hiraNya-pataye ambikaapataya umaa-pataye pashu pataye namo namah Namaha to the One who has golden hands ( hiraNya-baahave ) ; who is the golden hue or whose speech is charming ( varNa means colour or word ) ; who is of golden form ( ruupa ) or whose form is charming ; who is Lord ( pataye ) of wealth and gold ; who is the Lord of Mother Ambika and Lord of Uma and who is the Lord of all beings ( pashu : animals ) And from the rudram he is described as the following: om namaste astu bhagavan visvesvaraya mahadevaya tryambakaya tripurantakaya trikalagni-kalaya kalagnirudraya neelakanthaya mrutyunjayaya sarveshvaraya sadashivaya shrimanmahadevaya namah Om. Oh, Bhagavan, may this salutation be unto you who is the Lord of the universe, the great God, the three-eyed, the destroyer of demon Tripura, who is the Sandya time when three fires are lit, the Rudra that is the fire that consumes the universe, the blue-necked, the conqueror of death, the Lord of all, the ever-auspicious one. Salutations unto the glorious great Lord. Hope above helps sources are Shivopasana Mantra from Krishna Yajurveda and Sri Rudram.
[Open Design] The first draft of the Open Design Definition aymeric at kuri.mu Tue Mar 19 17:01:37 UTC 2013 Massimo Menichinelli said : > Yes, some elements will probably be joined together if not erased > from the definition. A typical aspects of the Open definitions is > that they focuses only on the knowledge items, and not on the > practice, the culture, the tools... so it won't cover everything, I > agree on this. > At the moment they are just placeholders for text or for reminding > some topics that we can discuss if they should be included and how. > But in any case we should focus a bit on the different kind of > design that is already using Open Source strategies: we already have > graphic and font design cases that are open for example, so they are > part of Design and therefore of Open Design. I think that we don't > have to go very deep in each field, the definition itself can be > forked so there maybe some derivative and more deep definitions (for > example an Open Fashion Design Definition based on the general Open > Design Definition). There is still a lot of time for developing I would keep it simple and generic so as to 1. make it adoptable by groups that have been already busy with the topic (you are taking the example of fashion, so following this train of thought, it could be nicer for a group such as the open wear project to naturally adopt and appropriate the open design definition instead of giving a ready to use local translation) and 2. limit the definition pollution (there is already license fragmentation). > There are some interesting points in your proposal, thanks! For > example about RoHS certification and so on, we should discuss if we > want to include some more values beside Openness. For example: do we > agree that open design of guns can be called Open Design? Is it only > a technical definition or does it have some other values like social > and environmental sustainability? My definition was really meant as a possible template to kickstart a discussion, and in that sense I added the RoHS bit having thought that you were already considering the ecological aspect of the project. Personally I am a bit split on the topic. I once suggested the idea of "Fair Trade Hardware" a couple of years ago in the Bricolabs network to address the issue of conflict minerals and the economical dark side of the semiconductor industry. But I doubt that the open design definition is a place to dictate a particular ethic or means of production. If we look at similar attempts, at the level of software licensing, we can see that it did not work very well beyond the value of such initiatives to highlight these very issues to a limited audience (ethical-GPL, copyfarleft, anti-evil free software, If Defense Distributed -- I guess you are referring to them with open design guns -- wants to use the open design logo if they are following the definition, then so be it. If you want to make a definition that is really in the lineage of free software and open source, then it really should be "for any purpose." For the record, the latter condition was not part of the original free software definition. RMS added it, as freedom "0" at a later stage, realising that it was an essential The open design definition, and the licenses it will depends on, are objects with different properties. Depending on their use and context, these properties, whether they are economical, political or social will produce different values. 
The same Linux kernel can take kids off the street when used for creative and empowering activities in a hackerspace, and help Google to bootstrap a mobile empire. > One more thing about requiring the use of Open Source Software or > Open Source Hardware for developing an Open Design project. > Unfortunately there are a lot of technologies and softwares that are > used and needed by designers (and in FabLabs as well) that are not > open source, if we require only open source tools there would be > very little of real Open Design. "real Open Design" That's the interesting bit here. If you want the definition to have an impact, then it should not describe a present limitation, but instead propose a direction for the future. In that sense I have no problem asking for a complete no nonsense free production pipeline. Or do we already need a fork for a free design definition? ;) More information about the opendesign
In the previous step we branched our data from main into a new denmark-lakes branch, and overwrote the lakes.parquet to hold solely information about lakes in Denmark. Now we're going to commit that change (just like Git) and merge it back to main (just like Git). Having made the change to the data file in the denmark-lakes branch, we now want to commit it. There are various options for interacting with the lakeFS API, including the web interface, a Python client, and lakectl, which is what we'll use here. Run the following from a terminal window:

docker exec lakefs \
  lakectl commit lakefs://quickstart/denmark-lakes \
  -m "Create a dataset of just the lakes in Denmark"

You will get confirmation of the commit including its hash.

Branch: lakefs://quickstart/denmark-lakes
Commit for branch "denmark-lakes" completed.

ID: ba6d71d0965fa5d97f309a17ce08ad006c0dde15f99c5ea0904d3ad3e765bd74
Message: Create a dataset of just the lakes in Denmark
Timestamp: 2023-03-15 08:09:36 +0000 UTC
Parents: 3384cd7cdc4a2cd5eb6249b52f0a709b49081668bb1574ce8f1ef2d956646816

With our change committed, it's now time to merge it back to the main branch. As above, we'll use lakectl to do this too. The syntax just requires us to specify the source and target of the merge. Run this from a terminal window:

docker exec lakefs \
  lakectl merge \
  lakefs://quickstart/denmark-lakes \
  lakefs://quickstart/main

We can confirm that this has worked by returning to the same object view of lakes.parquet as before and clicking on Execute to rerun the same query. You'll see that the country row counts have changed, and only Denmark is left in the data. But…oh no! A slow chill creeps down your spine, and the bottom drops out of your stomach. What have you done! 😱 You were supposed to create a separate file of Denmark's lakes - not replace the original one! Is all lost? Will our hero overcome the obstacles? No, and yes respectively! Have no fear; lakeFS can revert changes. Tune in for the final part of the quickstart to see how.
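For completeness, the same commit and merge could in principle be done with the Python client mentioned above instead of lakectl. The snippet below is an unverified sketch, not part of the quickstart: the method and model names follow the generated lakefs_client package as commonly documented, and the host and credentials are placeholders you would replace with your own.

import lakefs_client
from lakefs_client import models
from lakefs_client.client import LakeFSClient

# Placeholder connection details for a local quickstart instance.
configuration = lakefs_client.Configuration(host="http://localhost:8000/api/v1")
configuration.username = "ACCESS_KEY_ID"
configuration.password = "SECRET_ACCESS_KEY"
client = LakeFSClient(configuration)

# Commit the change on the denmark-lakes branch.
client.commits.commit(
    repository="quickstart",
    branch="denmark-lakes",
    commit_creation=models.CommitCreation(
        message="Create a dataset of just the lakes in Denmark"),
)

# Merge denmark-lakes back into main.
client.refs.merge_into_branch(
    repository="quickstart",
    source_ref="denmark-lakes",
    destination_branch="main",
)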
How Sir Tristram and his fellowship came into the tournament of Lonazep; and of divers jousts and matters. BUT Sir Tristram was not so soon come into the place, but Sir Gawaine and Sir Galihodin went to King Arthur, and told him: That same green knight in the green harness with the white horse smote us two down, and six of our fellows this same day. Well, said Arthur. And then he called Sir Tristram and asked him what was his name. Sir, said Sir Tristram, ye shall hold me excused as at this time, for ye shall not wit my name. And there Sir Tristram returned and rode his way. I have marvel, said Arthur, that yonder knight will not tell me his name, but go thou, Griflet le Fise de Dieu, and pray him to speak with me betwixt us. Then Sir Griflet rode after him and overtook him, and said him that King Arthur prayed him for to speak with him secretly apart. Upon this covenant, said Sir Tristram, I will speak with him; that I will turn again so that ye will ensure me not to desire to hear my name. I shall undertake, said Sir Griflet, that he will not greatly desire it of you. So they rode together until they came to King Arthur. Fair sir, said King Arthur, what is the cause ye will not tell me your name? Sir, said Sir Tristram, without a cause I will not hide my name. Upon what party will ye hold? said King Arthur. Truly, my lord, said Sir Tristram, I wot not yet on what party I will be on, until I come to the field, and there as my heart giveth me, there will I hold; but to-morrow ye shall see and prove on what party I shall come. And therewithal he returned and went to his pavilions. And upon the morn they armed them all in green, and came into the field; and there young knights began to joust, and did many worshipful deeds. Then spake Gareth unto Sir Tristram, and prayed him to give him leave to break his spear, for him thought shame to bear his spear whole again. When Sir Tristram heard him say so he laughed, and said: I pray you do your best. Then Sir Gareth gat a spear and proffered to joust. That saw a knight that was nephew unto the King of the Hundred Knights; his name was Selises, and a good man of arms. So this knight Selises then dressed him unto Sir Gareth, and they two met together so hard that either smote other down, his horse
The summit was a two day event this time round, so there was lots of choice of great things to go and listen to. We kicked off with a keynote from Suan Yeo. Here are my highlights from his keynote speech :- - A lot of the content on the net is repurposed and re-imaged - how can you make the content say what you want it to say. - Start with why. - Do schools kill creativity. - Children are born inquisitive. - Sometimes you win, sometimes you learn. - Shift from students as consumers to students as creators. - The sign up rate for MOOCS is HUGE. - MOOC like reading the newspaper. - You pick what you want to learn when you want to learn it. - Ask questions that can’t be Googled.. - Flip the teacher - get the students to lead the class for the day. - Plan for the last mile not the first - Let technology do the hard work. - Computational thinking - problem solving - the steps are important not so much the answer. - Internet Explorer - the best browser for downloading a better browser ( !!! ) Then the sessions started and it was all on, finding the rooms and settling down to 8 sessions of Googly learning. Here are my highlights of the two days :- The sessions were hands on / having a go so my notes are short and bullet point style. Playlists - Curating YouTube with Jim Sill This was a great, useful session where I learned lots of really good stuff that I can apply and share. - You need an account for a playlist on YouTube. - When you subscribe to stuff it teaches google what to recommend. - You can click on these to stop them being shown - the three dots - but you can't get rid of recommended ones so use your personal account for the silly stuff, then it won’t show up on your work account for google to recommend from it. - Deleting the search history will delete all the things that guide the recommendations too. Watch on full screen - you loose all the controls. Large player - pushes all the other on screen stuff out of view - you can still use all the controls. Safety Mode :- - Down the bottom - it says safety mode - gets rid of comments from the screen. - Limits recommended videos to a certain type of rating. - links to what you are watching. - You can push safety mode out to all chrome books on the control panel. - for example - Sesame street video - all the recommendations are of the same TV rating of the show you are watching. Subscription - you can set it up to email you when new ones are added - but the normal setting will just include it on your feed when you log in. You can set the video up to play from a certain spot - this changes the url link to accommodate this setting. Add to - playlist. You can share playlists across users / sign ins Add to playlist or start a new playlist. - Public - anyone can find it - Unlisted - need the link can’t be searched for - Private - is private There is a button that says remove duplicates - useful When you play one from a playlist - it will continue on and play the rest in a loop. You can drag and drop the order that they are in. The share option on the playlist shares the whole playlist, not just one of the videos. Embed code - embeds the whole playlist. It does not show recommended videos at the end , it automatically goes onto the next one then shows a curly arrow to refresh. - Click on enable privacy enhanced mode to embed the playlist on a site. Hover over the vid in the playlist, for lots more options. - Can edit start and end times - trim up the front and back of video - doesn’t effect the vid on the web, just what appears in your playlist. 
- Can add notes / instructions - On the upload button you can record straight from the web cam. - You can record instructions in the playlist and mix it into required trimmed videos. Get students to snip and edit up playlists to show their understanding of a topic. - Use the filter drop down to guide searching. - Use a comma and add playlist to a search - you will only get playlists on that topic. The main starting points are :- - navigational structure - page layout - look and feel - colour, typeface etc.. Get to your content in no more than 2 clicks. - Why? get to important info quickly. - The more clicks the faster the user experience declines. Web pages are set up in tables. This creates a frame for the content - you usually don’t see this frame. 1200 px is a fairly good standard browser size for the width of your site. Have the tabs as category titles so this makes it more obvious there are drop downs under them. People will then find it natural to go to the drop downs to use them. - Layout - Keep it simple, keep it clean. - Page layout based on the content. What colour is your personality? Colour - emotion, temperature, symbolise. Create an account in colour lovers to save your colour schemes in. Take photos of products that have colours that inspire… use photoshop or pixlr to get the hex codes and create a colour palette. Serif / san serif serif - print san serif - web as there is more space Do not use Comic san!!! Tab titles - use a serif font as there is less space Being able to take tours and walks around galleries all round the world is an opportunity not to be missed. You can loose a LOT of time browsing though this project. You can’t expect students to collaborate if teachers don't model it - Synchronous - in real time - Asynchronous - out of time, the glue that holds collaboration together across all sorts of groups. Cooperation + Contribution = Collaboration - Contribute is the one that sometimes gets lost in the mix - + co creation - what are you going to do together? It is not always about you and your work, it is about the final thing that you are working on together. How are we supporting collaboration online? Potential for miscommunication - Can you use tools other than google docs for collaborative learning, to build on the comfort level issues of working only through online methods - skype?, chat?, forum? poll everywhere - free account takes first 40 answers What can you do in 2 steps that used to take 7? Have drive in grid view and colour the folders for instant recognition. Thumbnails of files. Autosave is every three to seven seconds Develop an efficient naming protocol for documents. Name, class, title - these become searchable terms. Put a header on the doc and use #name, class, title - this is then searchable too Create searchable identifiers My Session doc - contains lots of great pointers for using the research tool in Google docs. I have learned LOADS of great stuff over the 2 days and the plan now is to start breaking it down and working out how I can use it and how I can share it with others. Lots to do !!! But, we all need to :- BE MORE DOG!! Embrace the experience!Grab the frisbee!!
Joint models with time to progression Consider a RCT (Randomized Controlled Trial) which aims at assessing the efficacy of a drug in patients suffering from a given cancer. In this trial, $p$ individuals are observed at several time points. At a certain point in time, some individuals are given a placebo and others are given the drug. Assume (out of simplicity) that each patient only has one tumor. The efficacy of the treatment (for a given individual $i$) can be assessed by measuring the tumor size $M_i(t)$. Longitudinal modeling can be used to gain insights on the way the tumor size evolves over time. The true time of disease progression, denoted by $P_i$ $(1 \leq i \leq p$), can be defined as the earliest time at which some criteria is met. In the following, we define progression time to be the time at which $M_i(t) > c M_i(0)$ (for some $c > 1$). This progression time is usually not observed as it may occur between two consecutive visits. We can define $P_{i}^{\mathrm{obs}}$ the earliest visit at which the criteria is met. In addition to this, this progression time can be right-censored as individuals may dropout before progression is actually observed. Let $C_i$ denote the censoring time. We actually observe $T_i = \min(C_i, P_{i}^{\mathrm{obs}})$ and the tumor sizes at each visit $\mathbf{m}_i = (m_{i,j})_{1 \leq j \leq n_i}$. In this situation, the time-to-event is censored but the observations are censored too. An individual who reaches "progression" is removed from the RCT. Joint models allow to combine survival analysis (here, the "time-to-event" is the time to progression) and longitudinal data analysis (modeling of tumor growth). As presented in this book by D. Rizopoulos), a joint model consists of a Cox proportional hazard model for the survival part and a Linear Mixed Effects (LME) model for the longitudinal part. Given that, for each individual, the observations are: $(T_i, \delta_i, \mathbf{m}_i)$ with $\delta_i = \mathbb{1}_{P_{i}^{\mathrm{obs}} \leq C_i}$, one can write the likelihood $p(T_i, \delta_i, \mathbf{m}_i \mid \theta)$ as: $$ p(T_i, \delta_i, \mathbf{m}_i \mid \theta) = \int p(T_i, \delta_i, \mathbf{m}_i \mid \mathbf{b}_i, \theta) p(\mathbf{b}_i \mid \theta) \, d\mathbf{b}_i, $$ where $\mathbf{b}_i$ denote the random effects of the LME model and $\theta$ denote the model parameters. A key assumption is the following: conditionally on $\mathbf{b}_i$, the survival part and longitudinal part are independent. That is: $$ p(T_i, \delta_i, \mathbf{m}_i \mid \mathbf{b}_i, \theta) = p(T_i, \delta_i \mid \mathbf{b}_i, \theta) p(\mathbf{m}_i \mid \mathbf{b}_i, \theta). $$ I am wondering whether this assumption (the conditional independence) actually holds in the situation I presented above. In many examples [used to illustrate joint models], there is no explicit relationship between the progression of the measurement and the time-to-event. For instance, there is no explicit relationship between time to death and the number of CD4 cells in patients suffering from AIDS. Still, in the present case, the relationship between the time-to-event (progression) and the longitudinal trajectory (tumor size) is explicit. How can I include this explicit relationship between the time-to-event and the longitudinal trajectory in a joint model? More specifically, should the survival part of the joint model depend on the threshold $c$? It seems to me that the only information is in the tumor size process $M_i(t)$, $i = 1, \ldots, n$. 
Either you treat this as a longitudinal outcome evaluated at some follow-up times $t_{ij}$, $j = 1, \ldots, n_i$, and you fit an appropriate mixed effects model describing the average longitudinal evolutions; or you are interested in the time until $M_i(t) > c M_i(0)$, which should be treated as interval-censored data, and you fit an appropriate survival model for it. It is not evident why you want to consider the same process twice in a joint model. At the point of model convergence, you will have (Pearson / Schoenfeld) residuals for the mixed and Cox models. Plot the residuals against each other. Inspect any possible trend. If the residuals show a trend, there are likely time-dependent effects of treatment on tumor size/status, and a more sophisticated approach, such as time-varying covariates, should be considered. I think I understand your question better. Basically, you are concerned that because the time-to-PD is "explicitly" (completely) dependent on the tumor size, simply adjusting for treatment assignment doesn't work, right? Are you using RECIST to measure PD? If so, then the only case in which PD isn't a function of target or non-target lesion growth is the appearance of new lesions. Fit that as a separate event time. Does that answer your question? Yes, time-to-PD depends on the tumor size. Indeed, "progression" is defined as the time at which the tumor becomes "too big". In a joint model, the survival part assumes that: $$ h_i(t \mid M_i(t), w_i) = h_0(t) \exp\left( \gamma^{\top}w_i + \alpha m_i(t) \right), $$ and $$ S_i(t) = \mathbb{P}\left( P_i^{\mathrm{obs}} > t \right) = \exp\left( -\int_{0}^{t} h_i(s) \, ds \right), $$ with $w_i$ some baseline covariates. I was under the impression that this could be modified to make the relationship between time-to-PD and tumor size explicit. @Pouteri could you clarify if disease progression is RECIST or not? It could be a progression criterion other than RECIST. We just assume that the progression time can be explicitly obtained from the time-varying covariates (RECIST is an example). @Pouteri basically, response assessment (change in tumor volume or area from baseline) is precisely what is used to determine whether PD occurred. So conditional on the longitudinal volume of a single-lesion tumor, there is 0 information added by the designation of "PD". I'm not sure what you mean by "there is 0 information added...". To formulate my question differently: I am surprised that the constant $c$ (which defines the progression threshold as $M_i(t) > c M_i(0)$) does not appear in the joint model formulation. Intuitively, I would have expected the hazard function $h_i$ to depend on it. Let us continue this discussion in chat.
The Business Process Modeling Notation (BPMN) is a standardized notation for modeling business processes. We are currently developing a precise execution semantics for BPMN, using graph rewrite rules. In addition to that we developed two transformations. We developed one transformation from BPMN to the workflow system YAWL, which allows BPMN models to be executed in workflow. We developed another transformation from BPMN to the Petri net formalism, which allows BPMN models to be formally analyzed. We are in the process of defining a complete formalization of the BPMN 2.0 execution semantics, using graph rewrite rules. The benefit of formalizing the execution semantics by means of graph rewrite rules is that there is a strong relation between the execution semantics rules that are informally specified in the BPMN 2.0 standard and their formalization. This makes it easy both to understand and to validate the formalization. We also implemented the formalization in a tool called GrGen. Having a formalized and implemented execution semantics in terms of graph rewrite rules supports simulation, animation and execution of BPMN 2.0 models. In particular we aim to use the formal execution semantics to verify workflow engines and service orchestration and choreography engines that use BPMN 2.0 for modeling the processes that they execute. The current version of the formal execution semantics supports most of the BPMN control-flow elements. It is described in this technical report. The implementation of the execution semantics is available through a virtual machine that is made available here. We developed a series of screencasts to explain how the execution semantics works and how it can be accessed. These screencasts can be found here. A graphical user interface to the execution semantics, along with some example BPMN models can be found here. The BPMN to YAWL transformer can transform BPMN models into YAWL models. The BPMN models must be constructed with the STP BPMN modeler. Both the transformer and the modeler are Eclipse plugins and must be installed with Eclipse (see below). YAWL is a workflow engine. The result of the transformation can be executed directly by YAWL, it can be opened in the YAWL editor and it can be opened in ProM for analysis. The BPMN to Petri net transformer can transform BPMN models to Petri net models. It implements the theory presented in this technical report. The BPMN models can be constructed using the ILOG BPMN Modeler. The following figure shows a screenshot of an example BPMN model in the ILOG BPMN Modeler. The resulting Petri net models can be opened in ProM. This tool will automatically lay-out the models and allows various forms of analysis, such as deadlock and livelock checks, to be performed on the Petri net. The following figure shows what the transformed model from the previous figure looks like in ProM. The tool can be started from the command prompt. Starting it without arguments, will show the acceptable arguments. The example BPMN files from this technical report are included in the zip file. The behaviour from the figures above can be transformed by typing:
It's been a long time since I've been thinking of writing some posts about packers/crypters/protectors. I'm not sure how many I'll write; it will probably depend on the interest of the audience. What I do know is that I'll try and follow the blog's philosophy, so we'll go bottom up, explaining the basic concepts or pointing out the best references when I deem it appropriate. Packers, Crypters, Protectors The first time one tries to look into this topic, one comes across these different names for what at first glance seem to be pretty much the same. These terms are nowadays somewhat mixed, but I think the following definition won't harm and might shed some light for the inexperienced:
- A Packer's main goal is to reduce the executable size using compression algorithms. (e.g. UPX)
- A Crypter's main goal is to encrypt the executable, hindering the disassembly process. (e.g. EasyCrypter)
- A Protector's main goal is to make the task of debugging an executable more difficult, using anti-debugging techniques. (e.g. Yoda's Protector)
- A Hybrid combines two or more of the above characteristics (e.g. Crypter)
The categorization problem should be obvious now, since many existing tools combine more than one of the above attributes. The confusion as to why some people consider a pure packer to be a protection against reverse engineering may come from the fact that all of the above tools modify the Original Entry Point (OEP) of the executable and modify the Import Address Table (IAT), either compressing, encrypting or protecting it. For a better understanding of why this is a bother when reversing, it's key to realize that one of the first steps when starting the reverse engineering process of a program is to locate the OEP as well as function calls and common API references. Since these tools compress or encrypt the executable code and the IAT, the reverse engineer cannot locate those APIs until the unpacking has taken place. It should suffice to say that knowledge is always valuable, but for those of you wondering about the practicality of learning how to unpack a packed binary or how to create your own simple packer/crypter, I'll make a case. From a penetration tester perspective, knowing how to create your own packer/crypter may come in handy in situations where you need to bypass antivirus software in order to achieve code execution on your target. This has always been one of the main goals of crypters, and they're heavily used by malware. From a reverse engineer or binary auditor perspective, you'll come across many samples that are in fact packed. For the most common cases, there are automated tools that will be able to unpack the binary for you. Nevertheless, for new or unknown packers you'll be on your own, and manually unpacking them will be the only way to go. If you're a developer, you may want to know more about protectors/crypters in order to prevent unwanted eyes from prying on your application; many commercial applications make use of these kinds of tools to keep the crackers at bay. Introductory Example (Unpacking UPX) Before giving a brief overview on theoretical concepts, I think I could show a manual (sort of) unpacking of the very well known UPX packer. The program we're using for this example is Windows' notepad.exe.
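As an editorial aside before the walkthrough: the static differences that the next steps inspect with GUI tools (section names, raw sizes, entry point) can also be checked programmatically. The sketch below uses the third-party pefile library; the file paths are placeholders, and this is only an illustration, not part of the original walkthrough.

import pefile  # pip install pefile

# Hypothetical paths to the original and the UPX-packed binaries.
for path in ("notepad.exe", "notepad_UPX.exe"):
    pe = pefile.PE(path)
    print(path)
    print(f"  entry point RVA: {pe.OPTIONAL_HEADER.AddressOfEntryPoint:#010x}")
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        print(f"  {name:<8} raw_size={section.SizeOfRawData:<8} "
              f"virtual_size={section.Misc_VirtualSize:<8} "
              f"raw_offset={section.PointerToRawData:#x}")
    pe.close()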
First, we've downloaded UPX (from here), and we've packed notepad.exe:

C:\Documents and Settings\adrian\Desktop\Unpacking\UPX\upx308w>upx.exe -o ..\notepad_UPX.exe ..\notepad.exe

That should have produced a file called notepad_UPX.exe which we'll use for the demonstration. It might be worth our while to stop now and try to identify any obvious differences between the original and the packed binaries. Looks like UPX did a good job: it reduced the notepad.exe size from 69K to 48K by means of compression. Now let's look at some PE tools to spot some other differences. First we'll run RDG Packer Detector (download here) on both binaries, and the result is below: As we can see, RDG says that the original notepad.exe was developed with Microsoft Visual C++ 7.0 and that the packed version has been packed with UPX. This was the expected result, since UPX is a very notorious packer and it's been around for a long time. Another thing we can observe is the section table; we'll use PEiD for that (download here). OK, so we have three sections (.text, .data, .rsrc) and everything looks normal. Besides that, we can see the entry point located at offset 0x73d9. That's some differences there. We still have three sections, but the names have changed from .text and .data to UPX0 and UPX1. That's not putting much effort into concealing the packer, not that concealment was UPX's goal anyway. The Entry Point has changed as well; now it points to the start of the unpacking routine (within UPX1). It's also interesting that the Raw Size of UPX0 is exactly 0 bytes, ain't that weird? That causes two sections to have the same Raw Offset (0x400). These kinds of things are a strong indicator that the executable has been through some kind of manipulation. So now, let's go and try to unpack the notepad.exe file that we've packed with UPX before. First we open the executable in Immunity Debugger to see a common sign of packed executables. OK, so now the EIP should be sitting on the first instruction of our program, which, as we know now, is the first instruction of the unpacking routine. Many packers, including UPX, start by saving the processor state (registers) with a PUSHAD instruction. This is what that looks like: We're gonna do some cheating for the sake of the explanation here. We'll take a look at the OEP within the debugger right now, just after loading the executable. After the unpacking takes place, we know (for we have the original file to compare) that the execution will jump to 0x0100739D (the OEP). But before the unpacking takes place, this is what lies at that address: That's certainly not good code, more like a bunch of zeroes. Now the little trick for the simple UPX. Since the packer starts by saving the registers as we saw above, we can expect these registers to be restored right before the execution of the original notepad code starts. Thus, we'll set a breakpoint on those registers to figure out when they are restored. Remember that our goal is to figure out the OEP. We'll do as follows:
- Single step over the PUSHAD instruction (hit F7)
- Right click on ESP and click "Follow in Dump". You should be seeing the values of the registers in the dump window right now.
- Select the values of one of the saved registers in the dump window and set a hardware breakpoint, on access.
- Resume execution (hit F9). If lucky, we should be stopping at the breakpoint we set in the previous step.
A few instructions after where execution is stopped there should be a JMP, that will lead us to (oh surprise) the unpacked code of notepad.exe, and thus the OEP. Single step from that JMP just once and you'll land on the OEP at 0x0100739D (as we already knew). Don't keep stepping for now. We'll make use of OllyDumpEx (download here), a plugin for both Immunity Debugger and OllyDbg that dumps the process to disk. Now that the process is unpacked in memory, we can dump this to disk, creating an unpacked executable. That executable won't run just like that; we'll need to do a little bit of work on it, but for now, let's dump it. The options you have to select on the OllyDumpEx window are displayed below. Click on Dump and save it with the name of your choice (in my case notepad_UPX_dump.exe). At this point, if we try to run the dumped binary it will display an error message; in other words, it won't run. As disappointing as that might be, it has a rational explanation, and that is that the IAT needs to be repaired. We'll talk about the IAT and how to repair it manually in future post entries. For now, it suffices to say that our dumped binary has no clue where to obtain the addresses of the API functions it requires to execute properly. Many times, there is no need to repair the IAT manually and we can rely on the ImpREC tool (download here) to do that for us. What we're doing now to fix the IAT is:
- Run the packed version of notepad (notepad_UPX.exe) and leave it there.
- Run ImpREC.
- In the dropdown menu, locate your notepad_UPX.exe process.
- Then modify the OEP box at the bottom to point to the OEP we've previously found, without the base address (i.e. just the offset). In this case 0000739D.
- Click on "Get Imports".
The ImpREC window should look like the picture below. Now hit "Fix Dump", select the previously dumped file (notepad_UPX_dump.exe) and click OK. That's it, ImpREC should have fixed the IAT of our process and created a new file called "notepad_UPX_dump_.exe" (note the trailing underscore). You can try and run it now; if you followed all the steps, the notepad window should open as expected. Finally we have an unpacked version of notepad that we can now reverse engineer at will. We've seen a little bit about packers and we've shown a quick and very easy example of how to unpack UPX to whet the appetite. There are many more things to do with this, and many more topics to cover, as it gets more challenging and interesting at once. Hopefully we'll cover more of that in subsequent entries. Till then, if you have any questions, please leave a comment. Take care!
Security within Databricks SQL requires administrators to configure access to S3 storage through an instance profile and data object owners to configure fine-grained access using Databricks table access control. Instance profiles allow you to access your data from SQL endpoints without the need to manage, deploy, or rotate AWS keys. Databricks table access control is an expressive, cloud-agnostic, and fine-grained security model that provides end-to-end security on your data lake with auditability. Table access control allows setting fine-grained row- and column-level permissions using SQL GRANT statements. It is an open standard familiar to database and data warehouse users and allows data owners in each department to delegate data access without the need for complex cloud access control configuration. This article gives an overview of table access control, provides the basic steps to configure table access control, and shows how to implement common patterns for granting access to data objects. It also explains how to use credential passthrough for legacy implementations.

Table access control enables you to secure the following objects:

- CATALOG: controls access to the entire data catalog.
- DATABASE: controls access to a database.
- TABLE: controls access to a managed or external table.
- VIEW: controls access to SQL views.
- ANY FILE: controls access to the underlying filesystem. Users granted access to ANY FILE can bypass the restrictions put on the catalog, databases, tables, and views by reading from the filesystem directly.

Only Databricks administrators and object owners can grant access to securable objects. A user who creates a database, table, or view in Databricks SQL or using a cluster enabled for table access control becomes its owner. The owner is granted all privileges and can grant privileges to other users. If an object does not have an owner, an administrator can set object ownership. The following table summarizes the available roles and the objects for which each role can grant privileges.

| Role | Can grant access privileges for |
|---|---|
| Databricks administrator | All objects in the catalog and the underlying filesystem. |
| Catalog owner | All objects in the catalog. |
| Database owner | All objects in the database. |
| Table owner | Only the table. |

For more information, see Data object privileges. This section describes the recommended steps for configuring table access control. It describes when steps are required or optional and the environments in which the steps are performed. Requirements:

- Databricks account on the Premium plan.
- Databricks workspace on the E2 version of the Databricks platform. For information about creating E2 workspaces, see Create and manage workspaces using the account console. All new Databricks accounts and most existing accounts are now E2. If you are not sure which account type you have, contact your Databricks representative.
- Administrator has the Databricks SQL entitlement. To grant the Databricks SQL entitlement:
  - In the Databricks Data Science & Engineering workspace, go to the admin console.
  - Click the Users tab.
  - In the row for your account, click the Databricks SQL access checkbox.
  - Click Confirm.

An administrator performs these steps in your IdP (if you use group synchronization) and the Data Science & Engineering workspace admin console. This step is optional. A Databricks administrator performs this step in a notebook in the Data Science & Engineering workspace. Administrators set owners using ALTER statements.
To programmatically generate the ALTER statements required to change object ownership, an administrator can run the following notebook on a Databricks cluster enabled with table access control. The notebook queries the metastore for a set of databases and generates the ALTER commands to assign ownership to the databases and the tables contained in the databases. The simplest option is to set the owner to a group of admins. Alternatively, to enable a delegated security model, you can select different owners for each database, giving each the ability to manage permissions on the objects in the database. An administrator performs these steps in the AWS Console, the Data Science & Engineering workspace admin console, and the Databricks SQL admin settings. For any data you want to be queried in Databricks SQL, an administrator must:

- Configure an instance profile that grants access to the underlying storage. Databricks SQL requires one instance profile with access to any data to be queried across all SQL endpoints, whereas in the Databricks Data Science & Engineering workspace it is common to have several instance profiles, each with partial permissions. If you have an instance profile that provides global access already registered in Databricks, you can reuse it.
- Register that credential in Databricks SQL.

For details on both steps, see Configure an instance profile. An administrator performs these steps in the Databricks SQL query editor. Databricks SQL administrators and object owners use SQL statements to define access to datasets. This requires all datasets to be registered as tables in the metastore. You can skip this step if you have already created tables in your metastore. However, if the tables were defined using the Hive syntax, you must recreate them. Start a SQL endpoint. Run a query in the Databricks SQL query editor to create a table you want users to be able to query. Example commands to be issued by an administrator user (or any user with the necessary CREATE privileges):

CREATE DATABASE sales;
CREATE TABLE sales.purchases LOCATION "s3://mys3bucket/mytable";

Data object owners perform this step in the Databricks SQL query editor. Data object owners grant privileges to users or groups by issuing GRANT statements. There are several ways to do this depending on the desired complexity of the permissions structure. Databricks recommends you use the groups defined in Step 1. For each group of users, assign permissions to objects. It is common to do this at the database level. This could be as simple as an administrator or owner issuing the following command in Databricks SQL:

GRANT USAGE, SELECT, READ_METADATA ON DATABASE sales TO `analysts`;

This command gives read access to the analysts group on the sales database. Privileges are inherited, so granting read permission on the database allows read access to all the tables and views stored in the database, including any future objects added to the database. For a detailed explanation of the privileges that can be granted to users and groups, see Privileges. For common patterns in setting up permissions, see Common patterns. (Optional, but recommended). It is common to set up private user storage and team storage, which allow users to create their own tables in a sandbox area to which only they (or their team) have access. The following example creates a database called user1_sandbox that user1 can write data to:
CREATE DATABASE IF NOT EXISTS user1_sandbox LOCATION "s3a://mybucket/home/user1";
GRANT CREATE, USAGE ON DATABASE user1_sandbox TO `firstname.lastname@example.org`;

This command purposefully gives USAGE permission but does not make user1 the owner of the database. This allows user1 to read and write objects in the user1_sandbox database, but critically user1 cannot grant other users access to them, which could be used to circumvent access controls. This section describes common patterns for granting access to data objects. To grant all Databricks SQL users read-only access to all objects registered in the metastore, an administrator issues the following command:

GRANT USAGE, SELECT, READ_METADATA ON CATALOG TO users;

Often administrators and users are accustomed to working with data access permissions at a group level. In the Databricks Data Science & Engineering workspace, this is typically achieved using an instance profile associated with a cluster that is scoped to allow only a particular group to attach to it. To use this pattern, administrators perform the following steps:

- For each cluster in the Data Science & Engineering workspace, note the set of users allowed to access the cluster (ideally qualified by the use of a group).
- Examine the credentials on the cluster to determine the levels of data access that should be granted to the group for each database.
- Issue the corresponding GRANT statements, typically on the database:

GRANT USAGE, SELECT, READ_METADATA ON DATABASE telemetry TO `data_science`;

Any object added to this database will be accessible to the group. To allow data sharing within the same team, you can implement team sandboxes:

CREATE DATABASE IF NOT EXISTS team1_sandbox LOCATION "s3a://mybucket/home/team1";
GRANT CREATE, SELECT, USAGE, READ_METADATA ON DATABASE team1_sandbox TO `team1`;

team1 is a group defined in Set up group synchronization or create groups. The team can safely share data in team1_sandbox without the ability to share data outside the team. Databricks includes two functions, current_user and is_member, that allow you to express column- and row-level permissions dynamically in the body of a view definition. These functions let you implement the following use cases:

- Column-level permissions
- Row-level permissions
- Data masking

For details, see Dynamic view functions. Databricks supports credential passthrough to control access to cloud storage for limited access patterns. Credential passthrough is a vendor-specific implementation that allows user identity to be passed through to the cloud storage provider, which then verifies permissions on the files themselves. Credential passthrough allows you to authenticate automatically to S3 buckets from Databricks compute resources using the identity that you use to log in to Databricks. Credential passthrough has two limitations:

- It does not provide fine grained—column or row level—security and as a result can be used only on direct file access.
- Users with passthrough privilege can bypass the restrictions put on tables by reading from the filesystem directly.

It is thus considered to be a legacy approach. Databricks recommends you choose other solutions when available. There are two modes for accessing data using passthrough: You can access path-based tables directly; Databricks automatically passes your user identity through as the account used to access the file.
This works for direct path-based table access as follows:

SELECT * FROM delta.`s3:/.../myfolder`

This access pattern requires you to access files directly rather than use tables registered in the metastore. Thus you must know the explicit location of the data in the object store, without the benefit of the schema browser. This method does not require ANY FILE permission. To have a "cataloged" version of a passthrough table, you can use views. With this method, however, you carry the burden of partition updates, schema drift, and keeping the view definition up to date.

CREATE VIEW v AS SELECT * FROM delta.`s3:/.../myfolder`

Views that use passthrough on path-based tables are not fully supported by all data types and formats.
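To make the dynamic view functions mentioned above more concrete, here is a minimal sketch of a view that combines row-level filtering and column masking with is_member. The table, column, and group names are invented for the example, so adapt them to your own schema:

-- Members of `admins` see every row and the real card number;
-- everyone else sees only the EMEA rows with the column masked.
CREATE VIEW sales.purchases_restricted AS
SELECT
  order_id,
  region,
  CASE WHEN is_member('admins') THEN card_number ELSE 'REDACTED' END AS card_number
FROM sales.purchases
WHERE is_member('admins') OR region = 'EMEA';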
Every time dbt Cloud runs a project, it generates and stores information about the project. The metadata includes details about your project's models, sources, and other nodes along with their execution results. With the dbt Cloud Discovery API, you can query this comprehensive information to gain a better understanding of your DAG and the data it produces. By leveraging the metadata in dbt Cloud, you can create systems for data monitoring and alerting, lineage exploration, and automated reporting. This can help you improve data discovery, data quality, and pipeline operations within your organization. You can access the Discovery API through ad hoc queries, custom applications, a wide range of partner ecosystem integrations (like BI/analytics, catalog and governance, and quality and observability), and by using dbt Cloud features like model timing and dashboard status tiles. You can query the dbt Cloud metadata:
- At the environment level, for both the latest state (use the environment endpoint) and historical run results (use modelByEnvironment) of a dbt Cloud project in production.
- At the job level, for results on a specific dbt Cloud job run for a given resource type.
The Discovery API is currently available in Public Preview for dbt Cloud accounts on a Team or Enterprise plan. It's available to all multi-tenant accounts and to only select single-tenant accounts (please ask your account team to confirm). Preview features are stable and can be considered for production deployments, but there might still be some planned additions and modifications to product behavior before moving to General Availability. For details, refer to dbt Product lifecycles.

What you can use the Discovery API for
Click the tabs below to learn more about the API's use cases, the analysis you can do, and the results you can achieve by integrating with it. To use the API directly or integrate your tool with it, refer to Use cases and examples for detailed information. Use the API to look at historical information like model build time to determine the health of your dbt projects. Finding inefficiencies in orchestration configurations can help decrease infrastructure costs and improve timeliness. To learn more about how to do this, refer to Performance. You can use, for example, the model timing tab to help identify and optimize bottlenecks in model builds. Use the API to determine whether the data is accurate and up to date by monitoring test failures, source freshness, and run status. Accurate and reliable information is valuable for analytics, decisions, and monitoring, and helps prevent your organization from making bad decisions. To learn more about this, refer to Quality. When used with webhooks, it can also help with detecting, investigating, and alerting on issues. Use the API to find and understand dbt assets in integrated tools using information like model and metric definitions and column information. For more details, refer to Discovery. Data producers must manage and organize data for stakeholders, while data consumers need to quickly and confidently analyze data on a large scale to make informed decisions that improve business outcomes and reduce organizational overhead. The API is useful for discovery data experiences in catalogs, analytics, apps, and machine learning (ML) tools. It can help you understand the origin and meaning of datasets for your analysis. Use the API to review who developed the models and who uses them to help establish standard practices for better governance.
For more details, refer to Governance. Use the API to review dataset changes and uses by examining exposures, lineage, and dependencies. From the investigation, you can learn how to define and build more effective dbt projects. For more details, refer to Development.

Types of project state
There are two types of project state at the environment level that you can query the results of:
- Definition — The logical state of a dbt project's resources, which updates when the project is changed.
- Applied — The output of successful dbt DAG execution that creates or describes the state of the database (for example: dbt test, source freshness, and so on).
These states allow you to easily examine the difference between a model's definition and its applied state, so you can get answers to questions like "Did the model run?" or "Did the run fail?" Applied models exist as a table/view in the data platform given their most recent successful run.
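To make the environment-level queries described above concrete, here is a minimal Python sketch of a call to the Discovery API. The endpoint URL, the bearer-token header, and the GraphQL field names are assumptions based on the environment/applied model described above, so verify them against the current API schema before relying on them:

import requests

# Assumed Discovery API endpoint and auth scheme; check your dbt Cloud docs.
DISCOVERY_URL = "https://metadata.cloud.getdbt.com/graphql"
HEADERS = {"Authorization": "Bearer <service-token>", "Content-Type": "application/json"}

# Illustrative query against the environment endpoint's applied state;
# the field names are hypothetical and may differ from the live schema.
QUERY = """
query Models($environmentId: BigInt!) {
  environment(id: $environmentId) {
    applied {
      models(first: 10) {
        edges { node { name uniqueId } }
      }
    }
  }
}
"""

response = requests.post(
    DISCOVERY_URL,
    headers=HEADERS,
    json={"query": QUERY, "variables": {"environmentId": 123}},
)
response.raise_for_status()
print(response.json())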
Bootstrap is popular for a reason. But if you think there is nothing better, take a look at Propeller.

Bootstrap and Material Design Are No Unity
Bootstrap has been around since 2011. Developed at Twitter, it made an effort to advance the mobile-first approach in web design. It can't be denied that it was successful. Bootstrap took off right off the bat, was adopted in no time, and is still considered the default framework for responsive web design. Bootstrap can even be found in most responsive WordPress themes. Of course, Bootstrap has its own ideas of what a website should look like, though it should be mentioned that it's not so intrusive that you couldn't replace every single one of these suggestions with one of your own.

Propeller: Landing Page. (Screenshot: Noupe)

However, if you already know that you want your website to be based on Google's design language, Material Design, you can make this work much easier. The Digicorp team went ahead and united Bootstrap and Material Design.

Propeller Creates the Unity
Propeller is Bootstrap in Material Design, ready to be used out of the box. In 2017, Propeller was one of the most popular front-end frameworks. It is available for free download under the liberal MIT license on its own website, but also on GitHub. Propeller doesn't have major demands and does not need to be used to its full extent, so you can also just benefit from individual components. Thus, each of the individual components is available for download as well. If you want quick results, you may be interested in the ready-made Propeller themes, although most of them are paid.

Premade Theme Based on Propeller. (Screenshot: Digicorp)

If you have a Bootstrap-based website already, Propeller offers a separate download of just the files required for the theming. Here, you have to pay attention to the file and folder structure, though. A detailed documentation helps with that. In the end, the result is identical to using the full Propeller package. Propeller comes with 25 components, which are presented in detail, including a comparison to the default Bootstrap component. Five additional components are supplied by third parties. Currently, Propeller is based on Bootstrap 3; however, a switch is on the roadmap. While I have yet to use it in a project, Propeller is a part of my toolbox. Definitely be sure to check it out.

Do you think it would be suitable for a mobile app design based on Cordova and HTML5? Yes, Propeller has been used successfully to build mobile apps using Meteor and Ionic. The framework is flexible enough to suit those needs.
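For readers who want to try the component-by-component route described above, a minimal sketch of pulling Propeller into an existing Bootstrap 3 page might look like the following. The file names and paths are assumptions, so check the downloaded package and its documentation for the actual structure:

<!-- Bootstrap first, then the Propeller theming CSS (paths are hypothetical) -->
<link rel="stylesheet" href="css/bootstrap.min.css">
<link rel="stylesheet" href="css/propeller.min.css">

<!-- An ordinary Bootstrap button should now pick up the Material Design styling -->
<button type="button" class="btn btn-primary">Save</button>

<!-- Scripts: jQuery and Bootstrap are required; a Propeller JS bundle, if included in the download, goes last -->
<script src="js/jquery.min.js"></script>
<script src="js/bootstrap.min.js"></script>
<script src="js/propeller.min.js"></script>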
Increase in Yield of products in an exothermic reaction
Given an exothermic reaction $\ce{N2 + 3H2 -> 2NH3}$ (which was initially in equilibrium). The temperature is then increased with time. I am supposed to predict the yield of ammonia with time. For an exothermic reaction, the equilibrium constant K is supposed to decrease with an increase in temperature. But it is found that the yield increases initially before decreasing continuously as expected. What causes the initial increase in the yield? I suppose that the heat overcoming the activation energy could provide an explanation, but is there any other concept which is not fully reliant on the kinetics of the reaction?

Can you provide the reaction? Reaction: N2 + H2 -> NH3 @A.K Can you answer now? I'm still not clear with this question. No, your question is on hold; you should rephrase it and elaborate to show more effort to understand the material, as the hold states. @A.K. Is this fine now? I'm sorry again about the phrasing. It may sound like that of a homework question.

Look, you say: "which was initially in equilibrium". Then the temperature is stable. In the reaction you have four moles in the reactants and two moles in the products. When you increase the temperature, the system pressure increases too. So to increase stability the system "walks" toward the products, because this decreases the pressure (2 moles in the products versus 4 moles in the reactants). If you remember the ideal gas law, the pressure is directly proportional to the number of moles. So this is the reason why the temperature favors the products.

Thank you for commenting on this post. How can you conclude that the pressure increases? No constraint is given to the volume of the system. And I was asking why the yield increases initially, which I suppose Le Chatelier's principle cannot explain. The reactants and products are gases; if the system stays open they will travel across the Universe. :) So, the system must be closed. If the system is closed, an increase in the temperature increases the pressure. So the system will choose a condition where the number of moles of gas is smaller. What about the variance of K with temperature? How can you say that the yield increases? The yield is supposed to decrease for a reaction where heat is released. I am confused. Thanks again.

For example, let's say that K = 1 (at 1 atm). When you increase the temperature, the reaction goes to a new equilibrium point. Using PV = nRT (R and V are constant): if the temperature increases, P increases and n decreases. If n decreases, that means the reaction will go toward the products (because on the product side the number of moles of gas is smaller relative to the reactants). Now, at the new temperature we have a new K; the equilibrium has been displaced. It must be that K' > 1. Oh, and Le Chatelier's principle can explain this situation. The system's "response" to a temperature increase is to displace the equilibrium toward the products to decrease the pressure. There also exists a relation between K1 and K2 at two different temperatures, namely the van 't Hoff equation. What about the enthalpy of the reaction? It's also a factor, and one which is not associated with Le Chatelier's principle. I think simply using the ideal gas law will not work in this situation.

To answer your question in short, according to Le Chatelier's principle the endothermic (reverse) reaction will be favored. So from the equation, it is clear that the reverse reaction produces a greater number of moles compared to the forward one.
Therefore, because the amount in moles of a compound is directly proportional to the mass of that compound, the reverse reaction will produce more yield than the forward reaction, which will produce less yield. Thus less ammonia will be produced. Hope this helps.
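For reference, the van 't Hoff relation mentioned in the discussion, which links the equilibrium constants at two temperatures to the standard reaction enthalpy (for an exothermic reaction $\Delta H^\circ < 0$, so raising the temperature lowers $K$):

$$\ln\frac{K_2}{K_1} = -\frac{\Delta H^\circ}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right)$$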
This is a brief summary of best practices to adopt when testing a developer's skills. Tests should be as similar as possible to real work activity. Having the developer work on real issues with a member of your team is preferable to timed questions and quizzes. It's likely that, if you use the tests listed below as ones to avoid, you will actually get a bad developer or one who has never actually written code. The candidate works with a team member to find and fix a bug. The candidate will be conducting the activity. The candidate works on an issue or a function to implement with a team member. A team member talks with the candidate about an actual problem the team has and asks for input about it. Ask the candidate technical details about an experience on their CV. The candidate is assigned a small project lasting 1-2 days. It has to be something you would find in a real working situation. It may be the implementation of a React component that can be used in a real project. Candidates work on their own. The project in this case takes a week to be completed. It's paid work. The project will implement a function that will actually be used in development or even in production. A team member reviews the candidate's GitHub account and talks with the candidate about the code she finds there. A team member reviews articles the candidate has written. They can be a topic of conversation as well. The candidate has to answer theoretical questions about languages. No actual developer answers theoretical questions when doing actual work. There is a small function to write, and you have to remember instruction syntax by heart. You don't do this when actually working. You use tools that fetch syntax for you so that you don't have to remember it and can concentrate on making the solution work. When actually working, the most important activity you perform is not to quickly write a function, but to evaluate many solutions to find the one that is easier to maintain and less likely to cause problems later. You are unprofessional if you stress your mind to remember syntax instead of using the many tools available that remember syntax for you. These are quizzes about theoretical aspects of a language. These are of no use when doing actual work. You have the candidate write code on a whiteboard whilst answering questions. Nobody writes code on a whiteboard when actually working. You have to solve puzzles that have nothing to do with software development. You never do this when writing software. You have to write an algorithm you could find in a standard library, like a quick search algorithm. You never do this when building a real application. You fetch a ready-to-use library. Your job is to evaluate the library to see if it's easy to use, how many bugs it carries, and whether it's supported and popular. You would be considered unprofessional if you wanted to reinvent the wheel every time you need a well-known algorithm.
2020 FACT Fellows

Title: An integrated modeling approach to evaluate the effects of dam removal on river corridors through the lens of the watershed
Over 1000 dams have been removed in the US during the last 30 years, and the number is predicted to increase in the future due to ecological and economic benefits. My Ph.D. project aims to develop a methodology that uses numerical modeling, data analytics, and remote sensing to quantify the impact of multiple dam removals on a watershed scale. Currently, we are testing the methodology for the well-documented Elwha River dam removals, and then intend to use Google Earth Engine to simulate the long-term sediment pulse propagation after dam removal for a data-limited watershed. The cyber training program will enable me to gain the skills required to implement remote sensing imagery and data analysis using HPC for my dissertation, but more importantly promote research transferability to current graduate students via graduate courses and departmental graduate student association seminars, and research reproducibility using GitHub. Eventually, I would like to use a single high-performance platform to execute the integrated approach for modeling different natural systems.

Title: Using FAIR principles to improve the usability of a food-energy-water system simulation model
Water, energy, and agricultural systems are inextricably linked, especially in water-stressed regions like California. Decision-makers within this system are faced with significant uncertainty, from hydrology and climate to future economic conditions and the regulatory environment. The California Food-Energy-Water System (CALFEWS) is a new open-source, Python-based simulation model that captures the multi-scale, multi-sector dynamics of water supply in California, including conjunctive use of surface water and groundwater in the Central Valley. I plan to use the tools learned at the FACT workshop to improve the findability, accessibility, interoperability, and reusability of the CALFEWS model. This will include the archiving of data, the improvement of project organization and metadata, and the development of a tutorial for downloading and running the model and analyzing the output data. I also plan to develop a teaching module on FAIR data and open-source coding principles to be used in an introductory course on programming for environmental research.

Title: Python Tools for the Use of High-Performance Computational Resources in the Context of sUAS Remote Sensing with Application in Evapotranspiration
Evapotranspiration (ET) is a key component that needs to be routinely monitored to track dynamic changes in the water cycle and water demand. One model used to estimate ET is the Two-Source Energy Balance (TSEB) model, implemented in the Python language. However, executing this model on a local computer is an issue due to large data files. On the other hand, for results validation, 2D flux footprint models for Eddy Covariance (EC) flux systems are used. These models are implemented in MATLAB code, whose results are then brought into an ArcGIS environment for data aggregation, with a final comparison of surface energy fluxes in Excel files, making the production and validation processes of sUAS ET complicated. The overall objective of this proposed research is to develop tools in the Python language that can take advantage of access to HPC resources to integrate remote sensing models being used to estimate evapotranspiration (ET).
My background and how I got here
Hello! My name is Todd Burlington and I am currently a physics undergraduate student at the University of Exeter. Starting next week I will be going into my fourth and final year 😨. During the summer I have been working in the Informatics Lab on a Summer Placement. This is an official 12 week scheme offered by the Met Office, and it means around 50 students work across the Met Office over the summer. I've been working in the Informatics Lab which has been absolutely great as it's clearly the best part of the Met Office (no bias here whatsoever!). Why did I pick the lab? I love their fundamental principle of exploring the intersection between science, technology and design. This appeals to me for a few reasons; firstly I am a scientist (well, a scientist in training anyway), and secondly I am passionate about technology. So of course, with the lab being a place that combines these, I knew this was where I had to be! Sign me up I said…

My experience in the lab
Before I got here and on my first day I didn't really know what to expect. The one thing I was expecting was to do a lot of coding and software development, which was a bit scary! My degree has given me some basic level of coding knowledge, I have done some C and MATLAB, but I wasn't sure I would be up to scratch. After being here only a few days, however, I realised that I needn't have been worried, especially with the support from the other guys in the lab. There are some skills you do need though, and I think these are what makes the lab the lab! So what do you need? Well let's list them:
• An open mind
• An eagerness to learn
• A willingness to work as a team
• A problem solving attitude
If you can do this stuff then you won't just work, but excel in this sort of working environment. This basically sums up my time in the lab, constantly learning new things!

What I have gained from the lab
I have gained a lot! It surprised me just how much I have actually gained, a lot more than a summer's worth of work I am sure. The main skills gained are the most obvious example of this. I think it is best to list these:
- GitHub and version control
- Responsive website design
- HTML, CSS, SASS development
- Across the stack development
- Linux and Raspberry Pi
- Bash commands
The less obvious examples of what I have gained are the insights into this new working culture. The lab is fundamentally a different place to work. You share a communal desk which becomes the centre for knowledge sharing, and I think this is the biggest and quite frankly best part of the lab. I've come to this conclusion because the only way to move forward on a project is to talk about it! This communal working allows you to do this, by asking questions, sharing knowledge and understanding each other's experiences. Another benefit is that if you have any issues you just have to say them out loud and I can guarantee you will get a response. This might be a solution or a conversation to help figure out the problem.

Projects I have worked on
I have worked on a few projects whilst I have been in the lab. The biggest and best is the #technorhino project. I have written a blog post all about this so I won't bore you all with it again. Some other projects which I have worked on to varying degrees are:
• AR Sandpit
• D3 weather visualisation - still ongoing
• Euporias ceramic bell project - still ongoing
This variety keeps your time in the lab fresh and interesting!
There is always something else to swap onto if you are frustrated with your current work or just interested in something else. It has been a pleasure to have this opportunity. The lab has fundamentally changed my outlook on the world of work. I was expecting to graduate university next year and go and get a job. While this is still the plan, what I now expect from that job is completely different! I cannot see myself working somewhere with a 'traditional' working culture. I just love the freedom and rapid learning that comes from a place like this. I will aim to either find somewhere that also follows these principles, or try to change any future organisation I work in so that they also see the benefits of this culture. So that is my future - but what about yours? If all of this sounds right up your street then apply for the lab! The summer placement scheme will soon be open for 2017, so keep an eye on that, or you could even try nicely asking the guys if you can visit for a day. Whatever you decide, I hope that you too try and change the world of work so that it can also follow the lab model.
If you really want to hurt Google, figure out a way for developers to make money off of apps in F-Droid, to the extent that they choose to remove their apps from the Play Store. @freakazoid that doesn't hurt google *at all*. Those two already peacefully co-exist. If we want to "hurt google" and make their monopoly less pervasive, the people writing their code have to be at the forefront of demanding that it's open and platform agnostic. Only that will put google in check. @nergalur In no sense do Google and free software peacefully coexist. Google is incredibly hostile to free software, only leveraging it where they can exploit it to expand their power, and doing everything they can to contain and destroy it otherwise. Google wishes to destroy free software in favor of "you can look but good luck doing anything interesting with it without us getting a cut." @nergalur This latter of course being the model ESR sold us all. @nergalur Here are some examples for you: Google employees aren't allowed to install AGPL software on their company laptop or desktop even for personal use. Google employees are not allowed to work on open source software projects without permission, unless it's a project Google is already working on officially. It is MUCH easier to get permission if you assign copyright for all your code to Google. Their policy toward free software is "embrace, extend, extinguish." @nergalur And of course there's the GPL prohibition in user space on Android, even though the requirements for using GPL software would not have been onerous. And now they're working on replacing the only GPL component of Android, the kernel, with one that's not even under Apache v2, but under MIT with only specific patent allowances instead of the blanket protection Apache v2 provides. @nergalur But anyway that wasn't your main point. Unfortunately I don't understand your main point. Which people writing their software? Google employees? The ones who care are long gone. Third party Android devs? They are often actively hostile to free software, violating the GPL left and right. @freakazoid you're right that "peacefully" probably isn't the best word to use so I'll grant you that. But in practice F-Droid hasn't threatened Google's monopoly on the Android ecosystem or their exploitation of free software. It's provided a nice alternative for tech nerds but not much else. My point is that the alternative simply existing won't do shit to curb Google's expansion in market share or their attacks on free software. @freakazoid the only way to actually rein in Google is for the people who work for it to build the software they're writing on their own terms, which means they've gotta be willing to actually fight back on this shit. They have to demand a say in how they get to write their own code. We've seen they have power when they organize, but it has to be sustained and political. That and we need to publicly fund free software development. Both need to happen in tandem. @nergalur I do think we need to try to get Googlers to organize, but I don't think they have much hope of getting Google to make any but token changes. All Google has to do is pay them lip service and make minor changes with no real impact and wait for the most vocal to get frustrated and leave. Meanwhile everyone they hire is willing to work there despite their public image.
@nergalur If public funding of free software development ends up being anything like public funding of research in the US, developers will be spending far more time writing grants than software. And only really mainstream software will get funded. And of course software aimed at protecting marginalized communities won't get any funding at all. @freakazoid Those workers can just make those changes themselves if they get organized enough, permission from google be damned. It's a ways off, but the only realistic way to curb the power of the execs of Google is for their workers to unionize. The second part is a problem with the bureaucracy of the current gov't, not public funding. Software protecting marginalized communities already doesn't get funding anywhere, period, so 🤷. At least a union could potentially raise that issue. @nergalur While having Google's employees organize would be fantastic, having worked there for 3 years I put the chance of US engineers there unionizing at much less than 1%. In fact I suspect a majority of their engineers would resist unionizing, a quarter of them quite strongly. @nergalur As for public funding, without a bureaucracy that can do it in a way that provides a net benefit, it hardly seems worth talking about. If they can't fund science or education properly they can't fund software development properly. Meanwhile, we still have the problem of everyone who can code getting offered $$$ to work on systems that are destructive to our rights and not feeling like they have much choice. @freakazoid sure, I'm not saying it's easy or that everyone would be on board. Nor are engineers the only employees that make Google run. I'm just saying that the only ones with the power (and possible interest) to put checks on Google are their workers. There is no more running away from it with alt tech or gov't regulation, because those have proven incapable of even presenting a speed bump. But when workers protested against Maven it was very much a speed bump. @freakazoid so to respond to the OP, supporting F-Droid is not a strategy that will de-google Android because it has no control over the underlying codebase. Only Google's workers have the power to change that. Of course we could just use alt hardware/mobile OSes, but that doesn't undo any of the infrastructure Google has built, which cannot simply be ignored as a real obstacle to software freedom. Google must be dismantled by its workers.
Check it out, guys: a mere day after creating the TaoControls project on BitBucket, I'm here with a sweet new update: the DelayedTextBox control. I developed this control to address what I felt was a pretty big hole in the functionality of the System.Windows.Forms.TextBox class (which happens to be the base class from which DelayedTextBox derives). As just about any developer familiar with Windows Forms knows, the TextBox (along with many other controls) has a TextChanged event, which gets raised immediately as soon as the text in the box is changed in any way—even by a single character. I don't know about you, but I've gotten pretty used to UIs that update dynamically as I'm typing (consider the "search as you type" feature that Google introduced not too long ago, for example). But at the same time, as a user I also demand that my UI remain responsive, always (in my opinion, an unresponsive UI has got to be among the top 3 most frustrating shortcomings of any software application featuring a GUI). The problem with the TextChanged event is that it's indiscriminate, which makes it difficult to accommodate both of these requirements: dynamic updates + responsive UI. For instance, the "Dashboard" application used by the traders at my company to monitor our overall trading activity includes a feature that allows the user to type in a symbol and immediately see the "book" (if any) where that symbol is being traded. Originally this was done by attaching a handler to the TextChanged event of the text box in question and performing the search in there. It turns out this posed a problem: on relatively rare occasions, for reasons outside the scope of this post, the search would take… kind of a long time. Like, maybe 1–2 seconds. Which, from the user's perspective—especially a trader's perspective—is pretty disconcerting. If I was standing in the trading office at the time this happened, I would hear one of the traders curse and grow quickly panicked: "Call the exchange, the system's frozen!" Of course, after everything went back to normal within a second or two, things settled down quickly. But it struck me that reacting to every single keystroke (effectively) within a TextBox was, for our purposes, overkill. Really, I thought to myself, there's no need to do that search until after the user's finished typing. We could always have done away with the FAYT ("find as you type") functionality altogether and added a button labeled "Search," of course. But this goes against one of the principles I mentioned earlier: having a UI that dynamically updates itself. So I set to work on a control that would offer the functionality I felt we really needed: a TextBox that raises an event only after a certain amount of time has elapsed. This is basically what DelayedTextBox is: in addition to all of the properties and methods available on TextBox (which, by the way, still includes that old TextChanged event should you need it), it comes with two additional members: a DelayMilliseconds property, specifying how much time to "wait" after the user changes the text to raise an event; and a TextUpdated event, which gets raised after said time has elapsed. Why not download it and give it a try? The downloadable .zip file includes an executable demo of all (2) controls in the library, so you can certainly give them a try before you decide whether or not you want to use them. I'd include a screenshot, but it really looks just like a regular TextBox.
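Based on the two members described in the post (the DelayMilliseconds property and the TextUpdated event), here is a minimal usage sketch. The namespace, the event handler signature, and the form and search method names are assumptions made for illustration, so check the library's demo project for the real details:

// Assumes the TaoControls library is referenced; the namespace is an assumption.
using System;
using System.Windows.Forms;
using TaoControls;

public class SymbolSearchForm : Form
{
    private readonly DelayedTextBox symbolBox = new DelayedTextBox();

    public SymbolSearchForm()
    {
        // Wait half a second after the last keystroke before reacting.
        symbolBox.DelayMilliseconds = 500;
        // TextUpdated fires once the delay has elapsed with no further edits
        // (handler signature assumed to follow the standard EventHandler pattern).
        symbolBox.TextUpdated += (sender, e) => SearchBooks(symbolBox.Text);
        Controls.Add(symbolBox);
    }

    private void SearchBooks(string symbol)
    {
        // Placeholder for the potentially slow book lookup described in the post.
    }
}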
Why extra bits of filament are extruded when printing the first few layers?
Hi, I installed a brand new hotend on my i3MK3S+. I noticed that, just like with the old one, when the printer was drawing an outline for the print, extra bits were often extruded. For example, when it is supposed to draw a straight line, it becomes ------O------- where O is a dot of extra filament. Such extra filament got pushed around by the nozzle and sometimes formed burnt stuff along the outer edge of the prints or the inner edge of big holes. Sometimes this also happened when other layers were printed. What is happening? As the hotend is brand new, leakage and an old nozzle are eliminated as reasons. How can we prevent this from happening?

A photo would definitely help. Do you have a good live-Z? This is a good way to check if you haven't used it before. I will try to take it next time. Yes, I had a good live-Z. The surface was butter smooth. Also - turn on NORMAL printing rather than STEALTH. This will highlight if there is a motion problem. I am at Quality. Never used Stealth. Stealth has nothing to do with "quality" ... as far as I know. It is all about motor control currents and driver frequencies. You select STEALTH or NORMAL in the printer menus. I am not sure if we are talking about the same thing. Under Print settings on the right, it says that I am under Quality. There is a Speed and a Draft option but no Stealth. Under Printer Settings, Dependencies, it says "default print profile: 0.15mm Quality @MK3". Under SETTINGS, there should be a MODE option that has NORMAL and STEALTH - and below that another option for CRASH DETECTION on or off. You want crash detection on to see whether the printer is stalling as the extruder moves. Once you've verified you aren't having stalls causing the dots, then you can go back to stealth to quiet the printer down. On the other hand, you may never have enabled stealth mode and crash detection is enabled. Either way - good to know if it is or isn't enabled. Again - this is a setting on the printer, not in the Slicer.

Probably just material oozing from the nozzle. The Mini now heats up in a two-stage process: first to 160 degrees, at which it performs the bed leveling, then to target temperature. The Mk3S's startup gcode takes it straight to target temperature, at which filament can ooze from the nozzle and result in artifacts such as the ones you described.
I'm using the following custom start gcode (under Printer Settings/Custom-G-code/Start G-code), adapted from @bobstro's custom code:

; PrusaSlicer start gcode for Prusa i3 Mk3S
; Last updated 20210129 - RF - Adopted from Bob George
M862.3 P "[printer_model]" ; printer model check
M862.1 P[nozzle_diameter] ; nozzle diameter check
M115 U3.9.3 ; tell printer latest fw version
; Set coordinate modes
G90 ; use absolute coordinates
M83 ; extruder relative mode
; Reset speed and extrusion rates
M200 D0 ; disable volumetric e
M220 S100 ; reset speed
; Set initial warmup temps
M104 S160 ; set extruder temp to 160 to prevent oozing
M140 S[first_layer_bed_temperature] ; set bed temp
M109 S160 ; wait for extruder no-ooze warmup temp before mesh bed leveling, cool hot PINDA
G28 W ; home all without mesh bed level
G80 ; mesh bed leveling
; Final warmup routine
G0 Z10 ; Raise nozzle to avoid denting bed while nozzle heats
M140 S[first_layer_bed_temperature] ; set bed final temp
M104 S[first_layer_temperature] ; set extruder final temp
M109 S[first_layer_temperature] ; wait for extruder final temp
M190 S[first_layer_bed_temperature] ; wait for bed final temp
; Prime line
G0 Z0.15 ; Restore nozzle position - (thanks tim.m30)
G92 E0.0 ; reset extrusion distance
G1 Y-3.0 F1000.0 ; go outside print area
G1 E2 F1000 ; de-retract and push ooze
G1 X20.0 E6 F1000.0 ; fat 20mm intro line @ 0.30
G1 X60.0 E3.2 F1000.0 ; thin +40mm intro line @ 0.08
G1 X100.0 E6 F1000.0 ; fat +40mm intro line @ 0.15
G1 E-0.8 F3000 ; retract to avoid stringing
G1 X99.5 E0 F1000.0 ; -0.5mm wipe action to avoid string
G1 X110.0 E0 F1000.0 ; +10mm wipe action
G1 E0.6 F1500 ; de-retract
G92 E0.0 ; reset extrusion distance
; end mods

It mimics the two-stage process employed by the Mini, to avoid oozing. Also, as the nozzle is heating up, look for any filament oozing from the nozzle and pull it away with needle-nosed pliers or tweezers. Thanks. I don't know anything about gcode. Do I just open the original one in a text editor and then copy and paste the one you posted at the beginning of mine? That's one way of doing it. My recommendation is to go into PrusaSlicer and switch to Expert mode in the upper right corner. Now when you go to Printer Settings, a new option Custom G-code is available in the list on the left. When you click on it, you see Start G-code on the left. Replace what's in that field with the startup code I posted before. Now, to make this permanent, click on the little floppy disk icon to the right of the dropdown menu with the available printer profiles. Then give this new preset a meaningful name, and it will be saved as a User Preset you can now select as a printer profile for future projects. Thanks. I cannot find the floppy disk icon. There is a wheel-like icon instead. Are they the same? When I placed the cursor on top of it, it says "Click to edit preset". It has four options: Edit preset, Edit physical printer, Delete physical printer, and Add physical printer. Which option should I use? You can get there by switching from the Plater tab to the Printer Settings tab (or Ctrl-4), or in the Plater view click the gear icon next to your currently selected Printer profile and select Edit preset. Found it. Thanks. Hi, I just tried a print with the replaced gcode. It looks like the printer rose to the target temperature of 230C. Then, it moved over those 49 different points across the build plate. Next, it drew the prime line at the front left. It waited there until the temperature dropped to 160C.
Then, it moved over those 49 points across the build plate again. Next, it moved back to the home position, waited for the temperature to rise to 230C, redrew a prime line on top of the one it had made, and then started making the actual print. Is that correct? No, that's not okay. This is what would happen if you appended the new startup code to the old code instead of replacing the old code completely. It should only do the bed leveling and prime line once. I highlighted the original one and deleted it. Then I copied and pasted the code you posted. I will re-do it again. Thanks. Just double checked. Under Start G-code, it has the same code you posted. What else could have gone wrong? I saved it under a new name. It is currently selected with a green flag before the name. [insert puzzled looking emoji here] There's only one G80 gcode in there so I don't see how in the world it would do two mesh levelings.... Okay, if you open the gcode in a text editor and check the beginning, it should only have my code, and only one G80 in the whole file. Or, you can save the project as a 3mf file, zip it, and upload it here. This way we have all the settings and can take a look. Not sure though that I'll be able to do much today as I'm heading out of town for the weekend.
Why does Apache Pulsar fail to start the broker service when enabling OpenID Connect Authentication?
After following the documentation (and answers in SO questions 1 & 2) to enable OpenID Connect Authentication, the pulsar.broker service fails to start and presents me with a flurry of 401 not authorised errors. I was not expecting this, as I thought authentication/authorisation was only initiated from a Pulsar client where an access token can be supplied. Here is an example of some of the 401 errors presented to me:

lin-0afa7c37.mstarext.com pulsar[12971]: 2023-07-07T10:19:33,590+0000 [main] ERROR org.apache.pulsar.functions.worker.PulsarWorkerService - Error Starting up in worker
lin-0afa7c37.mstarext.com pulsar[12971]: org.apache.pulsar.client.admin.PulsarAdminException$NotAuthorizedException: HTTP 401 Unauthorized
....
lin-0afa7c37.mstarext.com pulsar[12971]: Caused by: javax.ws.rs.NotAuthorizedException: HTTP 401 Unauthorized
....
lin-0afa7c37.mstarext.com pulsar[12971]: 2023-07-07T10:19:33,597+0000 [main] ERROR org.apache.pulsar.broker.PulsarService - Failed to start Pulsar service: org.apache.pulsar.client.admin.PulsarAdminException$NotAuthorizedException: HTTP 401 Unauthorized
....
lin-0afa7c37.mstarext.com pulsar[12971]: 2023-07-07T10:19:33,598+0000 [main] ERROR org.apache.pulsar.PulsarBrokerStarter - Failed to start pulsar service.
lin-0afa7c37.mstarext.com pulsar[12971]: org.apache.pulsar.broker.PulsarServerException: java.lang.RuntimeException: org.apache.pulsar.client.admin.PulsarAdminException$NotAuthorizedException: HTTP 401 Unauthorized

Is there some other configuration that I need to update that's not mentioned in the referenced documentation above?

When authentication is enabled, the Pulsar Broker, Proxy, and Function Worker must be configured to use authentication. Since OIDC is an OAuth2 implementation, you can follow the OAuth2 docs here: https://pulsar.apache.org/docs/3.0.x/security-oauth2/#enable-oauth2-authentication-on-brokersproxies. The broker.conf will look something like this (it varies based on your OAuth2 provider):

brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2
# When client credentials are stored in a file
brokerClientAuthenticationParameters={"privateKey":"file:///path/to/privateKey","audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/","issuerUrl":"https://dev-kt-aa9ne.us.auth0.com"}
# When client credentials are stored in a base64 string
brokerClientAuthenticationParameters={"privateKey":"data:application/json;base64,privateKey-body-to-base64","audience":"https://dev-kt-aa9ne.us.auth0.com/api/v2/","issuerUrl":"https://dev-kt-aa9ne.us.auth0.com"}
# If using secret key (Note: key files must be DER-encoded)
tokenSecretKey=file:///path/to/secret.key

Note also that if you are running in Kubernetes, it is possible to mount a service account token projection and use that token for authentication.

Thanks for the response michael, understood. Even after configuring the broker.conf as suggested I still get an error. The error I received was an access denied error, even though I supplied what I thought were the correct client credentials in a json file. Error: client.api.PulsarClientException$AuthenticationException: Unable to obtain an access token: Unauthorized (access_denied)

privatekey.json file: {"type": "client_credentials", "client_id"<EMAIL_ADDRESS>"client_secret": "sasdiQM....asds", "issuer_url": "https://someurl.com/"}

I've tested my client credentials via Postman and can retrieve an access token.
The only difference is that an access token URL is provided (.../token/oauth). I believe my issue is actually related to the token endpoint URL being presented in the "https://login.someweb.com/.well-known/openid-configuration" documentation. Annoyingly, this will take a while to update to the correct URL. Is there any way in Pulsar that I can override this with the correct URL? @raah - what is the issue with the URL? The .well-known/openid-configuration is added to the issuer_url to retrieve the token authorization endpoint, which is part of the OIDC spec (and possibly the OAuth2 spec). Apologies for the delayed response. The issue was that the token URL presented in the openid-configuration was not the URL I was expecting / what it should have been.
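For anyone debugging the same thing, a quick way to see which token endpoint will be discovered from the issuer is to fetch the discovery document directly. token_endpoint is the standard field name in the OIDC discovery metadata, and the URL below is the placeholder one from the question:

# Inspect the discovery document the broker will rely on (requires curl and jq)
curl -s https://login.someweb.com/.well-known/openid-configuration | jq '.token_endpoint'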
This is a ReactJS based personal resume website template. I have built this by following a Udemy course (credits below) and by beginning with the Ceevee template by Styleshout (credits also below), breaking up their template into isolated React components. Data is fed directly from a JSON file. This means that in its final form, it can be customized and used by anybody simply by filling in their own personal info into the JSON file, and the changes will be dynamically fed into the site. If you would like to use this template for your own personal resume website, read on to learn how to build your own copy. Please MAKE A DONATION NOW TO KEEP OUR SITE LIVE. If you have any difficulty setting this app up, leave a comment in the comment section and I will be glad to help you. Make it Your Own! Feel free to give us a $5 donation.

1. Make sure you have what you need
To build this website, you will need to have Node >=6 downloaded and installed on your machine. If you don't already have it, you can get it HERE.

2. Build a Create-React-App
Next, you will build the initial application using a handy tool called Create React App. This allows you to get up and running with a React app without the headache of setting up build-tool configurations. Go HERE to get started. When the app building is finished, run cd yourappname and then npm start to test it out. Hit ctrl+c in the terminal when you want to stop the server that the above command starts. For this project we will also need to install jQuery and ReactGA; do this by running npm install jquery --save and npm install react-ga --save in your terminal while inside your project folder. YOU MUST RUN THESE COMMANDS.

3. Download the template
Once you have a React app up and running by following the steps in the above link, download the code by clicking here and download the zip file. Unzip it to your desktop or any location. All you will have to do now is replace the "public" and "src" folders of your newly built app with mine that you just downloaded. If you run npm start now, you should see that your app renders the same as the one at the live demo link above. If you face any challenges, leave a comment and I will assist you.

4. Replace images and fonts
Next, you will want to replace the images, and fonts if you like, with your own. All you have to do is replace the images at public/images/header-background.jpg, public/images/testimonials-bg.jpg and public/favicon.ico with your own. YOU MUST KEEP THE SAME NAMES ON THE IMAGES.

5. Fill in your personal info
To populate the website with all of your own data, open the public/resumeData.json file and simply replace the data in there with your own. Images for the portfolio section are to be put in the public/images/portfolio folder.

6. Make any styling changes you would like
Of course, all of the code is there and nothing is hidden from you, so if you would like to make any other styling changes, feel free!

7. Enjoy your new Resume Website
When you're all done, run npm start again and you'll see your new personal resume website! Congratulations! Also check out our guide on how to create a game like Candy Crush from scratch.
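To illustrate step 5, here is a tiny hypothetical excerpt of what an entry in public/resumeData.json could look like. The actual keys are defined by the template's React components, so check the existing file and the code under src/ for the exact field names before editing:

{
  "main": {
    "name": "Jane Doe",
    "occupation": "Software Developer",
    "bio": "Short introduction shown on the landing page.",
    "email": "jane@example.com"
  }
}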
Photo: Sebastian Krog Knudsen/AU
Travelling to Aarhus: https://conferences.au.dk/getting-to-aarhus-and-aarhus-university
Accommodation in Aarhus: https://conferences.au.dk/accommodation-in-aarhus
Getting around in Aarhus: https://conferences.au.dk/getting-around-in-aarhus
Other practicalities: https://conferences.au.dk/practical-information
Please note that Monday June 6th is a Danish public holiday.
Venue: The workshop will be held in the Peter Bøgh Andersen auditorium (building 5335, room 016), Finlandsgade 23, 8200 Aarhus N: https://goo.gl/maps/ZGTk6Ttiv74dyNSb6
Workshop Dinner: On Thursday June 9th at 18:00, there will be a nice workshop dinner for all attendees. The workshop dinner will take place at the Math Department, Ny Munkegade 118, 8000 Aarhus C. You will find further details in the program. The workshop dinner is supported by Partisia Blockchain.
Rump Session: There will be a Rump Session on Thursday June 9 as part of the workshop dinner. The rump session will take place in "Auditorium E" at the Math Department, Ny Munkegade 118, 8000 Aarhus C. The Rump Session will be chaired by Carsten Baum and Daniel Tschudi, and entertaining presentations on recent results, breaking news and other topics of interest are encouraged. To submit a talk, please send an email with the subject Rump session TPMPC 2022 to email@example.com and firstname.lastname@example.org with the details of your talk, by 1 pm on Thursday June 9.
WiFi: The university wifi network for guests is called "AU Guest". Further details here: https://medarbejdere.au.dk/en/administration/it/guides/network/wirelessnetwork/#c1894383
Ran Canetti (Boston University) Nishanth Chandran (Microsoft Research) Chaya Ganesh (IISc Bangalore) Vipul Goyal (Carnegie Mellon University) Shai Halevi (Algorand Foundation) David Heath (Georgia Tech) Lisa Kohl (CWI Amsterdam) Yehuda Lindell (Coinbase) Giulio Malavolta (Max Planck Institute) Antigoni Polychroniadou (JP Morgan) Sven Trieflinger (Robert Bosch GmbH) Sophia Yakoubov (Aarhus University)
Call for Contributed Talks
TPMPC solicits contributed talks in the area of the theory and/or practice of secure multiparty computation. Talks can include papers published recently in top conferences, or work yet to be published. Areas of interest include: theoretical foundations of multiparty computation (feasibility, assumptions, asymptotic efficiency, etc.); efficient MPC protocols for general or specific tasks of interest; and implementations and applications of MPC. The TPMPC steering committee will select talks with the aim of constructing a balanced program that will be of interest to the audience. Contributed talks will be 20-30 minutes. The deadline for contributed talks was on Friday April 1st (anywhere on Earth). Notification of acceptance is expected by April 26th (updated from April 23rd). Please submit a short abstract (with a 1-2 page summary, up to five pages total) of your proposed talk via the submission system. Submissions should include the list of co-authors. On the submission system, you will also be asked to name the expected speaker(s), and confirm whether you will give the talk in person (in-person talks will be prioritized, but remote talks may still be considered). If your talk is an already published work, please include a link and information about where it was presented.
Carsten Baum (Aarhus University) Ivan Damgård (Aarhus University) Divya Gupta (Microsoft Research) Carmit Hazay (Bar-Ilan University) Claudio Orlandi (Aarhus University) Emmanuela Orsini (KU Leuven) Benny Pinkas (Bar-Ilan University) Peter Rindal (Visa Research) Dragos Rotaru (Cape Privacy) Peter Scholl (Aarhus University) Phillipp Schoppmann (Google) Thomas Schneider (TU Darmstadt) Sophia Yakoubov (Aarhus University) Diego Aranha, Aarhus University Carsten Baum, Aarhus University Ivan Damgård, Aarhus University Jesper Buus Nielsen, Aarhus University Claudio Orlandi, Aarhus University Peter Scholl, Aarhus University Sophia Yakoubov, Aarhus University Malene B.B. Andersen (email@example.com) TPMPC 2022 is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under the project "Secure, Private, Efficient Multiparty Computation" (SPEC), the Carlsberg Foundation under the Semper Ardens project "Center for Blockchains and Electronic Markets" (BCM), as well as Partisia Blockchain.
Currently, Machine Translation provides automatic recognition and translation of Arabic, Russian, French, Portuguese, Thai, Turkish, Spanish, Vietnamese, Indonesian, English, and Chinese, with deep optimization for scenarios such as commodity titles, commodity descriptions, commodity reviews, and seller-buyer communication.
- Title translation supports translation from English to Arabic, Russian, French, Portuguese, Thai, Turkish, Spanish, Vietnamese, Indonesian, and Chinese.
- Commodity description supports translation from English to Arabic, Russian, French, Portuguese, Thai, Turkish, Spanish, Vietnamese, Indonesian, and Chinese.
- Commodity review or seller-buyer communication supports translation from English to Arabic, Russian, French, Thai, Turkish, Spanish, Vietnamese, Indonesian, and Chinese.
Machine Translation provides high-quality translation between Chinese and English. It adopts Alibaba's advanced neural network translation framework, and is applicable to daily communication, traveling abroad, and other scenarios. More languages will be added in the future.
- en English
- ru Russian
- fr French
- zh Simplified Chinese
- es Spanish
- pt Portuguese
- ar Arabic
- tr Turkish
- th Thai
- vi Vietnamese
- id Indonesian

| Error code | Description | Solution |
| --- | --- | --- |
| 10001 | The request has timed out. | Send the request again. |
| 10002 | The error message returned when a system error has occurred. | Send the request again. |
| 10003 | The error message returned when URL decoding fails. | Make sure that the string is encoded by using UTF-8 and URL encoding is correct. |
| 10004 | Parameters are missing. | Check the input parameters. |
| 10005 | The error message returned when the language is not supported. | Make sure that both the source and target languages are supported. |
| 10006 | The error message returned when the system fails to recognize the string. | Make sure that the string you have passed in is correct. |
| 10007 | The error message returned when the translation fails. | Check whether the translated string is correct. |
| 10008 | The character length is too long. | Check the character length of the source text. The source text can be called multiple times. The length is limited to 5000 characters. |
| 10009 | The RAM user is not authorized. | Use the primary account to authorize the RAM user. |
| 10010 | The account does not activate the service. | Activate Machine Translation first. |
| 10011 | The RAM user encounters a service failure. | Contact Customer Support. |
| 10012 | The translation service cannot be called. | Contact Customer Support. |
| 10013 | The account does not activate the service or has overdue payment. | Activate the service or clear the overdue payment. |
| 19999 | An unknown error has occurred. | Contact Customer Support. |

Partial result customization is not supported for the moment. If part of the translation is incorrect, contact Customer Support to correct it. The length of the text string for a single translation cannot exceed 2000 characters, and is calculated by using String.length() <= 2000 in Java. Currently, only text translation is available. The APIs for voice translation and image translation will be provided in the future. Contact Customer Support if you need custom translation services.
- Check whether the aliyun-java-sdk-core and aliyun-java-sdk-alimt versions are normal.
- Revise the code as in the following example.
DefaultProfile profile = DefaultProfile.getProfile(
    "cn-hangzhou", // Region ID
    accessKeyId, // AccessKey ID
    accessKeySecret); // Access Key Secret
IAcsClient client = new DefaultAcsClient(profile);
// Create API requests and set parameters
TranslateECommerceRequest eCommerceRequest = new TranslateECommerceRequest();
eCommerceRequest.setMethod(MethodType.POST); // Set request method, POST
eCommerceRequest.setFormatType("text"); // format type
eCommerceRequest.setSourceLanguage("en"); // source language
eCommerceRequest.setSourceText("book"); // source text
eCommerceRequest.setTargetLanguage("zh"); // target language
DefaultProfile.addEndpoint("cn-hangzhou", "alimt", "mt.cn-hangzhou.aliyuncs.com");
TranslateECommerceResponse eCommerceResponse = client.getAcsResponse(eCommerceRequest);

Log on to the Alibaba Cloud console with the primary account, and click RAM to enter the Policy Management page. Add a policy. Authorize the RAM user to call the Machine Translation APIs.

How do I define non-translated elements in the source text and return those elements as they are in the translation? The source text for translation may contain many elements that do not need to be translated, such as words, abbreviations, and code. Machine Translation provides tags used to define non-translated elements in the source text. Insert
Updated January 20, 2023
This article applies to:
- Terrain 3D
- Terrain Forestry
- RoadEng Civil
- RoadEng Forestry
Please download the associated files to go with this example.
The Terrain module uses a triangular irregular network (TIN) to represent surfaces. Breaklines can be used to represent smooth linear breaks in topography such as creeks, ditch bottoms, road shoulders and others. Breakline elevations are defined between points (linear interpolation), thus breakline segments must be represented by triangle edges. Triangles cannot cross breaklines. Crossing breaklines (that do not share a point at the intersection) define more than one elevation at that point and are therefore inconsistent. The following example shows how breaklines improve a model and how to find and fix crossing breaklines. Skip to the Crossing Breaklines section if you are only interested in finding and removing crossing breaklines.
In this section we will remove the breakline properties from some features to see what effect this has on the Terrain.
1. Open file "BoundaryRoad.ter" included with this example.
Figure 1 – Plan View of Boundary Road Terrain with contours
2. Use the Window | New Window | Graphics | 3D menu to view this surface with the 3D Window (optional). Switch back to the Plan window afterward.
Figure 2 – 3D view of a portion of the surface
3. Select the features that define the road corridor (including top of cut).
Figure 3 – Selection of road corridor features (and some additional features) with the mouse.
4. Menu Edit | Modify Selected Feature(s) | Properties Ctrl-E (also available from the right click menu) pops up the dialog box below.
Figure 4 – The Breakline item is partially checked, indicating that some of the selected features have this property turned on.
5. Clear the Breakline check box and press OK.
6. Re-calculate the Terrain model (menu Edit | Terrain Modeling | Calculate Terrain Model; keep the existing settings).
Figure 5 – Plan and 3D view of surface recalculated with breaklines removed.
NOTE: The contours now spill onto the road; the original breaklines prevented triangles from crossing over the ditch and the right hand road edge.
7. Do not save your changes!
NOTE: Once you have created a feature that represents a breakline, use the Feature Properties dialog to set the Breakline property. Recalculate your TIN model after modifying or adding breaklines.
This section shows how to find and fix crossing breaklines.
1. (Re)Open file "BoundaryRoad.ter" included with this example (do not save changes if continuing from the previous section).
2. File | Insert File "Creek.ter", also included with this example (press OK to accept the default import options). This inserts a single breakline feature called CREEK.
3. Re-calculate the Terrain model (menu Edit | Terrain Modeling | Calculate Terrain Model; keep the existing settings). You will be presented with the following error messages:
Figure 6 – This message is presented for every crossing breakline if you clear the 'Do not show me this message again' box.
Figure 7 – This message is presented at the end of the triangle processing.
In this case it is clear where the crossing breaklines occur and which feature needs attention. However, in another model there may be many crossing breaklines and it may be hard to find them just by looking. The following steps will use the XBreak attribute to find crossing breaklines.
4. Use the Window | New Window | Text | Points menu to add a Points window to the panel at the right of your screen.
This window is already set up to show feature name and the XBreak attribute. You can choose which columns to display in the Points Window options dialog box (available from the right click menu). The text windows (both Features and Points) display a list of features; you can change this list by selecting the desired features and then updating the displayed list. We will do this in the next steps. 5. Use the Edit | Select Feature(s) | By Property dialog box to select all breakline features (also available from the right click menu). Figure 8 – Only Breakline features will be selected after pressing OK. 6. Press the Points button at the bottom left ribbon. Figure 9 – Points window display selected features button. 7. Press the XBreak column heading to sort the points by this attribute. You may have to scroll and/or press the column heading twice to sort with the XBreak items at the top (see figure below). 8. Select the XBreak point at the top of the list with your mouse. Points tagged with the XBreak attribute will be at one end of a crossing breakline segment. Figure 10 – Points window sorted by XBreak. The Plan window shows the selected CREEK feature with the XBreak point highlighted. When you click on an item in the Points window, if it is not visible, the Plan window will scroll this point to the middle of the window. Similarly, the point will be selected and set to the current point (and displayed in the status window). You may wish to experiment with this behavior by scrolling your Plan window, selecting another feature and then repeating the step above. There are many different ways to “fix” a crossing breakline: - Clear the Breakline property from one of the offending features. - Insert a new point in one breakline feature and snap it to a point in the other. Breaklines are allowed to cross if they share a common point. - Remove a segment from one of the breaklines (making it into two features). In this example the creek actually flows through a culvert under the road. We will remove this segment from the surface model. 9. Use the Edit | Modify selected Feature(s) | Break | At Current Point (Ctrl-Q) menu item to break the CREEK feature to the left of the road (the current point shown in the figure above). Reply OK to the Triangles will be cleared warning. 10. Select the first point to the right of the road (figure below) and repeat the step above to break the feature a second time. Figure 11 – Second break point in CREEK feature. 11. Select the middle feature and display its properties (Menu Edit | Modify Selected Feature(s) | Properties Ctrl-E). 12. Clear the Modelled property (this automatically clears the Breakline property) so that this feature will not be part of the surface. Rename it as CULVERT. Figure 12 – Middle portion of CREEK breakline isolated and renamed as CULVERT. It is no longer a breakline and will not contribute to the surface model. 13. Re-calculate the Terrain model (menu Edit | Terrain Modeling | Calculate Terrain Model; keep the existing settings). NOTE: There are no error messages. Also, the model (as indicated by contours) is not noticeably different from the one generated when there were crossing breaklines. The software attempts to accommodate the crossing breakline inconsistency and, in this case, it did a pretty good job. However, it is not wise to ignore inconsistent triangle error messages. 14. Choose menu File | Exit. Don’t save changes.
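The crossing-breakline check that Terrain performs is, at its core, plain 2D segment intersection: two breakline segments are inconsistent when they intersect without sharing a point. A minimal sketch in Python (the coordinates are invented for illustration, and this is not the Terrain module's own algorithm):

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p); 0 means collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def breaklines_cross(a1, a2, b1, b2):
    """True when segments a1-a2 and b1-b2 intersect without sharing an endpoint."""
    if {a1, a2} & {b1, b2}:  # a shared point makes the crossing legal
        return False
    return (orient(a1, a2, b1) != orient(a1, a2, b2) and
            orient(b1, b2, a1) != orient(b1, b2, a2))

# Hypothetical creek and road-edge breakline segments (x, y only); the two
# conflicting elevations at the intersection are what make this inconsistent.
creek = ((0.0, 0.0), (10.0, 10.0))
road_edge = ((0.0, 10.0), (10.0, 0.0))
print(breaklines_cross(*creek, *road_edge))  # True -> would be flagged with XBreak
```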
OPCFW_CODE
Updated for 2019 It sounds like an oxymoron, but in a year everything and nothing can change. That's exactly what has happened since I wrote this guide in late 2017. Now in January 2019, I figured it was time to update this guide. I've had an affinity for image optimization, going as far as to test out Google's guetzli, a hyper-optimized JPEG encoder, and running a comparison of ImageOptim vs. Squash 2. I decided to revisit something I've meant to test out: alternative image formats. For those who aren't as versed in image compression, our familiar choices for bitmapped images are: JPEG (lossy, no alpha), PNG (can do alpha, optimized by pre-processing with lossy strategies), BMP, and GIF (LZW and a complete waste of bandwidth). However, there are other image formats that browsers support: - JPEG 2000 (Safari 5+) - supports alpha, RGB/CMYK, 32-bit color space, lossy or lossless - JPEG-XR (IE9-11, Edge) - supports alpha, RGB/CMYK, n-channel, lossy or lossless, progressive decoding, 32-bit color spacing - WebP (Chrome, Opera, Android Browser, Edge 18+, Firefox 65+) - supports alpha, lossy or lossless. If you're keeping score, you're probably noticing two things: that's a fractured landscape, and Firefox wasn't mentioned. Both are true. When I first wrote this guide, Firefox hadn't backed a format, and instead chose to write its own JPEG encoder (MozJPEG) that offers a 5% data savings, as opposed to adding support for WebP or either JPEG format. However, Firefox 65, on target for early 2019, will support WebP, and with Edge switching to Chromium comes WebP support. So far no browsers support HEIF (high-efficiency image format), Apple's newly preferred image capture format for iOS, although I recently documented the fate of its support on macOS (it's actually quite good). From my experience, avant-garde image formats shouldn't necessarily be viewed as saving space so much as delivering more quality at the same file size, as each format seems to yield varying results, especially in the cases of JPEG2000 and JPEGXR, where drastically reducing the file size will produce so-so results. However, comparing like images at the same file size, both yield better results than highly tuned JPEG libraries like MozJPEG or guetzli. As a general rule, you can shave about 10% off of a JPEG (after optimization) with JPEG2000/XR to deliver like results, JPEGXR being the laggard of the two. WebP tends to deliver like results at slightly better file sizes. Combining classic bandwidth-saving strategies like minification of CSS, uglification of JS, lazy loading, and inlining the crucial CSS to speed up time-to-paint with avant-garde formats can shave off hundreds of kilobytes. Looking forward: WebP, HEIF, and AVIF When I wrote this guide, Internet Explorer had a bit more of a foothold, and it was unclear if Edge would manage to dig itself out of its hole. It is also unclear if Microsoft will continue support for JPEGXR now that it's moving towards WebP. For the obsessive or people with user bases with heavy Internet Explorer usage, JPEGXR is still an option, but for everyone else, I'd skip JPEGXR moving into 2019. Then there's BPG and FLIF, which no browsers have embraced nor announced support for, effectively banishing them to the realm of the forgotten. Apple is the only WebP holdout and has not announced plans to support it. In the iOS 10 beta and macOS Sierra beta, Apple tested WebP support but removed it before finalizing support.
Strangely, despite Apple's investment in HEIF, it's not even supported in Safari's technical preview. I'm not holding my breath for WebP support in Safari. Lastly, WebP might be a bump in the road to AVIF (AV1), a royalty-free open joint venture from engineers at many firms such as Mozilla, Google, Cisco and so on. Facebook raved about AV1 as superior to the x264 and VP9 video formats. Mozilla plans on supporting AVIF, so at least one major browser is onboard, with others likely to follow. The results look promising although, like all things in compression, there's rarely a clearcut winner. There are plenty of other places that perform comparisons, but the long and short is that, per kilobyte, JPEG2000, JPEG-XR and WebP deliver considerably better images than JPEG and (often) PNG, and it doesn't take too much effort. I'd recommend reading David Walsh's WebP Images and Performance and playing with the incredibly nifty format comparison tool. None of the three image formats requires any purchases, although JPEGXR is the most cumbersome of the formats on macOS. What you need: - A high-quality source file (preferably lossless) - JPEG-XR: Microsoft provides a free Photoshop plugin that can be downloaded here, or nab XnView - WebP: WebPonzie provides the most painless and easy-to-use solution. XnView also has WebP support. A small company, Telegraphics, also provides a WebP plugin; I had minor issues with it. - JPEG2000: Photoshop and Preview both have native support. XnView also has JPEG2000 support. (I've had the best results using Apple's Preview for whatever reason) Since there isn't an excellent singular solution, you'll need to open the high-quality source image in each of the listed programs. Photoshop can handle exports to all three formats (with the aforementioned plugins), making it the closest to a one-stop solution, followed by XnView (which can be cumbersome). With JPEGXR's fate in future Edge iterations hanging in the wind, WebPonzie + Apple's Preview is completely viable. ImageMagick is the other one-stop solution; it's a CLI utility, quite powerful and fast, although occasionally a little obtuse with the flag options each file format needs to toggle features like lossless mode or target sizes. Depending on your terminal comfort, this may be the easiest method. Getting started, however, is pretty straightforward. The easiest way is to use Homebrew, a package manager for Mac CLI utilities. If you have Homebrew already installed, run the following commands in your terminal (install both ImageMagick and webp), otherwise go to the aforementioned link: ImageMagick includes several CLI utilities, but the one we're concerned with is convert. Converting a file to WebP is a single convert command, and the same goes for JPEG2000 and JPEGXR; it just requires using the correct output extension. Tweaking the quality takes tinkering for each format. Adding multi-format support doesn't require much, just a picture element with one source per format and an img fallback. This can be mixed with srcset. Really, that's all it takes. Notably, on macOS previewing JPEG-XRs is a pain and on its way out, whereas at least WebP can be previewed in Chrome. It takes Photoshop, or opening a JPEG XR in XnView, then going to the actions tab and applying a filter so you can view the original JXR file (it's cumbersome). I created a CodeKit script to automate WebP and JPEG2000 and a CLI bash script to convert images to WebP / JPEG2000 / MozJPEG using ImageMagick.
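For anyone who wants to script the conversion step rather than click through Photoshop or XnView, a rough sketch of a batch converter is below. It assumes ImageMagick with the WebP delegate is installed (on ImageMagick 7 the entry point is magick rather than convert), and the folder names and quality value are placeholders:

```python
import pathlib
import subprocess

SRC = pathlib.Path("images")        # hypothetical folder of lossless masters
OUT = pathlib.Path("images/webp")
OUT.mkdir(parents=True, exist_ok=True)

for src in SRC.glob("*.png"):
    dest = OUT / src.with_suffix(".webp").name
    # ImageMagick picks the encoder from the output extension; swap ".webp"
    # for ".jp2" or ".jxr" to target JPEG2000 or JPEG-XR instead.
    subprocess.run(["convert", str(src), "-quality", "80", str(dest)], check=True)
```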
At the very least, you'll be providing better images at the same file sizes as a normal JPG. JPEG XR is likely on the way out, even if it's still supported by Edge. JPEG 2000 will never be embraced by the industry at large. WebP has a lot of traction these days. HEIF and AV1, though, might soon replace WebP. JPEG isn't going anywhere, anytime soon. - Jan 21, 2019 - added ImageMagick information - Jan 14, 2019 - rewrite of about 1/2 the content, restructured and added information about future formats. - Aug 17, 2017 - Correction on JPEG 2000 and SRCset - Jul 26, 2017 - Original article published
OPCFW_CODE
Over the last few years there has been rapid adoption of the public cloud, primarily propelled by the following: - Emerging technologies such as Docker containers & Kubernetes - Increased appetite for cloud native applications - Increased need to modernize monolithic applications using microservices, etc. More and more enterprises are embracing a multi-cloud strategy for a variety of reasons such as: - Avoiding vendor lock-in - Cost savings, etc. The above trend has created a new set of problems for multi-cloud administrators and auditors. For example, with respect to managing authorization policies, the administrator now has to understand the different tools and suitably configure authorization policies. This can get very complicated for the following reasons: - The user interfaces/APIs are generally very cloud vendor-specific. - There is no consistency in the terminology and representation of any given resource. - The operations that can be performed on the resources may not be the same, and even where they are the same, the operations may be named differently. Also, the granularity of the operations that can be performed may diverge significantly, such that consistent separation of duties may not be achievable or may be overly complex to configure correctly. Dealing with a myriad of tools and, more importantly, the silo’d nature of those tools and the inability to have a common or consistent set of authorization policies in such environments would ultimately result in poor security configuration of the multi-cloud environment, which can then become an easy target for exploitation. In this article we highlight some of the key concepts and themes that are fundamental to HyTrust’s approach to securing multi-cloud environments. These are being implemented in our flagship HyTrust CloudControl product version 6.0 that is currently in early access and slated to be generally available in early 2019. Key Concepts and Themes 1. Unified Visibility First and foremost, security starts with visibility of the assets in a given enterprise. You cannot secure what you are not aware of. Today, in a multi-cloud enterprise, one has to use the respective cloud vendor-specific silo’d consoles to get perspective of the inventory. HyTrust CloudControl 6.0 provides a comprehensive view of all the resources based on the following principles: - Unified, consistent and normalized view of all resources in a multi cloud environment using an abstracted inventory data model. Today, one has to log into the AWS console to view the AWS EC2 instances and to VMware Virtual Center to view the vSphere VMs etc. - End-to-end view of the related resources and their context. For example, it is very important to know if a container is running on a VM or on bare metal so that the related resources could be properly secured. - Consistent and consolidated view of all the audit logs for the various operations performed on the resources in the multi cloud. Today, one has to look at CloudTrail logs for AWS operations and VirtualCenter logs for vSphere events. 2. Unified Policy In a multi-cloud world, workloads are likely to move from one public cloud to another and it is very important to maintain a consistent security posture. Today, configuring security policies across multi-cloud requires deep understanding of the intricacies of the respective cloud platform. The heterogeneous nature of the cloud platforms makes it very difficult to configure consistent policies.
HyTrust CloudControl 6.0 provides a single pane of glass for configuring various security policies across a multi-cloud environment. For example, when it comes to configuring access control policies, HyTrust CloudControl provides a notion of abstracted roles that are made up of abstract operations. For instance, there could be an abstract role called VM_User_Role that is made up of a set of abstracted VM operations. Such abstracted roles would be suitably provisioned to target platforms using their respective Identity & Access Control APIs. For example, for AWS, suitable managed policies would be generated in JSON and provisioned to AWS IAM. So with HyTrust CloudControl, administrators could centrally define access control policies without deep knowledge or understanding of the cloud platforms, and instead rely on HyTrust CloudControl to do what’s needed to suitably provision policies onto the respective cloud platforms. Similarly, one could centrally define and manage policies for other security controls such as configuration hardening, secondary approvals, etc. 3. Security Automation To keep up with the dynamic nature of the cloud and the rapid pace at which DevOps is pushing new builds into production environments, security needs to be agile as well. HyTrust CloudControl 6.0 has taken a declarative approach to security, and various security policies can be defined as code through YAML documents called Trust Manifests. The Trust Manifests could be authored through a rich intuitive UI or directly using a favorite editor such as vi or emacs. The Trust Manifest would be made up of different sections, each corresponding to a security policy type such as access control, configuration hardening, deployment control, etc. Such Trust Manifests could be assigned at various levels such as AWS accounts or Kubernetes clusters or namespaces, and security policies would automatically apply to new resources as and when they are created under them. To learn more about HyTrust’s approach to securing multi-cloud environments and our upcoming HyTrust CloudControl (HTCC) release 6.0, with support for securing AWS & Kubernetes, please watch our recent webinar on this topic. Fill out this form to be one of the first organizations to try out HyTrust CloudControl 6.0 for Containers via our upcoming Early Access Program.
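To make the provisioning idea above concrete, here is a rough sketch (not HyTrust's actual implementation) of how an abstracted role could be rendered into an AWS IAM managed-policy document. The role name, the operation-to-action mapping, and the resource ARN are all invented for the example:

```python
import json

# Hypothetical mapping from abstracted operations to AWS-specific actions.
ABSTRACT_TO_AWS = {
    "vm.start": ["ec2:StartInstances"],
    "vm.stop": ["ec2:StopInstances"],
    "vm.view": ["ec2:DescribeInstances"],
}

def to_iam_policy(abstract_operations, resource_arn="*"):
    """Render an abstracted role as an IAM managed-policy document."""
    actions = sorted({a for op in abstract_operations for a in ABSTRACT_TO_AWS[op]})
    return {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": actions, "Resource": resource_arn}],
    }

vm_user_role = ["vm.start", "vm.stop", "vm.view"]  # e.g. the VM_User_Role above
print(json.dumps(to_iam_policy(vm_user_role), indent=2))
```

The same abstract role could then be rendered for another platform by swapping the mapping table, which is the point of keeping the policy definition platform-neutral.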
OPCFW_CODE
Is it possible to integrate a Wordpress blog with ASP.NET? Is it possible to integrate a Wordpress blog with ASP.NET? If yes, then how? I also want to integrate a WordPress blog with ASP.NET. If you got it working, please provide the steps. WordPress is written in PHP, and ASP.NET is in .NET. The server-side language is different, so what do you mean by integrate? If you are talking about having the main site (www.mysite.com) in ASP.NET and the blog site (blog.mysite.com) in WordPress, then yes, it is possible. You just have to install the main site and the blog site separately, and then use IIS or Apache to redirect according to the sub-domain name. If you want the user records to synchronize data between your ASP.NET site and your WordPress blog, then yes, this is possible. But the process is quite elaborate. When a user registers an account at your ASP.NET site, in addition to writing to your ASP.NET database, you should also write to your WordPress database. I am not aware of any API exposed by WordPress to manipulate its underlying database, so worst comes to worst you have to study the WordPress database schema, and maybe the WordPress PHP code, in order to learn how to do the user registration thing. The same goes for other operations. In short, you can do whatever you want to do, but it is very tedious. Thank you for giving me a reply. My issue is: in my website (which is developed in ASP.NET), when a user registers, one blog (created in WordPress automatically) is created for that user. Afterwards, whenever the user wants to change anything in that blog, he logs in to my website and all the changes are reflected in that blog. In short, I want to manage the WordPress blog from my ASP.NET application. Is this possible? I will really appreciate your help. Thank you. Ngu Soon Hui Thank you for the reply. But I am using an MS SQL Server 2005 database in ASP.NET, and WordPress uses a MySQL database, so it's complicated. Is there another way? Thank you. @Vatsal, not that I am aware of; my solution seems to be the only way @Graviton Where do I set up the redirection in IIS? I am a developer and we are using a similar system. We use linked servers in our Microsoft SQL Server Management Studio. By setting up our linked server we are able to use our variables in MS SQL to affect our WordPress database and are able to control all aspects of our WordPress blog through MS SQL. Here is a great article about setting up a linked server: http://www.ideaexcursion.com/2009/02/25/howto-setup-sql-server-linked-server-to-mysql/ If you have any questions please comment:
STACK_EXCHANGE
I received a link (thanks, Jonathan) to this post titled “Subvert from Within: a user-focused employee guide” on the Creating Passionate Users blog. I’m not sure that I’d characterize these customer-focused focus points as subversive. (BTW, a good “top eight list” can be found on Peter Davidson’s be connected blog.) I do like the top five (IMHO) out of the blog… - Frame everything in terms of the user’s experience. - Speak for real users… not fake abstract “profiles.” (I’ll include “Put pictures of real users on your walls” under this one) - Get your hands on a video camera, and record some users. (I’ll put this under “Know your customer”) - Challenge user-unfriendly assumptions every day. - Don’t give up. These are all great. But there’s one point in particular I don’t agree with in the blog: “Be afraid of Six Sigma. Be very afraid.” That’s like saying “be afraid of power steering,” something that you know has its place in certain everyday experiences, but you may not understand exactly how it works. (Note: I have a conceptual knowledge of it, thanks to How Stuff Works.) There are aspects of continuous improvement, striving for quality and better processes that can help the organization in different parts of the company’s operations. For examples, look at how our own Ops & Tech Group uses the Microsoft Office System Accelerator for Six Sigma, and John Porcaro’s note and overview on Six Sigma in Sales & Marketing. There have been some questions around process improvements at Microsoft and the impact to various teams, so let me put this rumour to rest: a squadron of black belts did not parachute in to Redmond, take the dev teams hostage and force them to read “Six Sigma for Dummies” while listening to Jeffrey Immelt sing his greatest hits. Now, we do have a number of examples of process and systems improvements that impact our products throughout the company, notably in security and privacy. I was reminded of this on the ride back from the Company Meeting as I sat next to Glenn Fourie, our International Privacy Strategist. From a process stance, we’ve built the Security Development Lifecycle (aka “SDL”), the development process we’re using across the company and in the product groups. It helps us ship software to our customers and partners that has been created by devs trained in the art of the SDL, spec’ed and tested to be more resilient and secure. Steve Lipner (you can see him mug for the camera at the Secure Software Forum) and Michael Howard in the SBU documented the lifecycle, which also includes our introspective look at products in something called the Final Security Review (affectionately known internally as the FSR). In order to get approval for release, software products must go through a detailed review. In the FSR, we look at whether or not the product is ready to release to our customers and partners before we get to the release. In his blog, Soma covered the various phases of the SDL in VS2005. The FSR not only looks to see whether the code is ready for release, it also helps us determine the origin of any issues through RCA and (if needed) prevent them from happening again in the future (through our engineering and security training curriculums, dev/test/spec or other process improvements).
OPCFW_CODE
const Object = global.Object; const Array_isArray = Array.isArray; const Number = global.Number; const Reflect_getOwnPropertyDescriptor = Reflect.getOwnPropertyDescriptor; const Reflect_setPrototypeOf = Reflect.setPrototypeOf; const Reflect_getPrototypeOf = Reflect.getPrototypeOf; const Reflect_get = Reflect.get; const Reflect_apply = Reflect.apply; const Reflect_set = Reflect.set; const Reflect_defineProperty = Reflect.defineProperty; const has = (object, key) => { while (object) { if (Reflect_getOwnPropertyDescriptor(object, key)) return true; object = Reflect_getPrototypeOf(object); } return false; }; const set = ($value0, value1, $$value2, $$value3, access, membrane) => { if ($value0 === null || (typeof $value0 !== "object" && typeof $value0 !== "function")) throw new TypeError("Reflect.set called on non-object"); while ($value0) { const $descriptor = Reflect_getOwnPropertyDescriptor($value0, value1); if ($descriptor) { if (Reflect_getOwnPropertyDescriptor($descriptor, "value")) { if (!$descriptor.writable) return false; break; } if ($descriptor.set) { Reflect_apply($descriptor.set, $$value3, [$$value2]); return true; } return false; } $value0 = Reflect_getPrototypeOf($value0); } const $value3 = membrane.clean($$value3); let $descriptor = Reflect_getOwnPropertyDescriptor($value3, value1); if ($descriptor && !Reflect_getOwnPropertyDescriptor($descriptor, "value")) return false; if (!$descriptor) { $descriptor = {__proto__:null}; $descriptor.writable = true; $descriptor.enumerable = true; $descriptor.configurable = true; } if (Array_isArray($value3) && String(value1) === "length") $descriptor.value = access.release(membrane.clean($$value2)); else $descriptor.value = $$value2; return Reflect_defineProperty($value3, value1, $descriptor); }; const get = ($value0, value1, $$value2, access, membrane) => { if ($value0 === null || (typeof $value0 !== "object" && typeof $value0 !== "function")) throw new TypeError("Reflect.get called on non-object"); while ($value0) { if (Array_isArray($value0) && String(value1) === "length") return membrane.taint($value0.length); const $descriptor = Reflect_getOwnPropertyDescriptor($value0, value1); if ($descriptor) { if (Reflect_getOwnPropertyDescriptor($descriptor, "value")) return $descriptor.value; if ($descriptor.get) return Reflect_apply($descriptor.get, $$value2, []); return membrane.taint(void 0); } $value0 = Reflect_getPrototypeOf($value0); } return membrane.taint(void 0); }; exports.Get = get; exports.Set = set; exports.TameValueToTameObject = ($value, message, access, membrane) => { if ($value === null || $value === void 0) throw new TypeError(message || "Cannot convert undefined or null to an object"); if (typeof $value === "object" || typeof $value === "function") return $value; if (message) throw new TypeError(message); return access.capture(Object($value)); }; exports.TameDescriptorToTameValue = (boolean, $descriptor, access, membrane, sandbox_Object_prototype) => { if ($descriptor === void 0) return void 0; const $object = {__proto__:null}; if (Reflect_getOwnPropertyDescriptor($descriptor, "value")) { $object.value = boolean ? 
membrane.taint(access.capture($descriptor.value)) : $descriptor.value; $object.writable = membrane.taint($descriptor.writable); } else { $object.get = membrane.taint($descriptor.get); $object.set = membrane.taint($descriptor.set); } $object.enumerable = membrane.taint($descriptor.enumerable); $object.configurable = membrane.taint($descriptor.configurable); Reflect_setPrototypeOf($object, access.capture(sandbox_Object_prototype)); return $object; }; exports.TameValueToTameDescriptor = (boolean, $value, access, membrane) => { if ($value === null || (typeof $value !== "object" && typeof $value !== "function")) throw new TypeError("Property description must be an object"); const value = access.release($value); const $$value = membrane.taint($value); const $descriptor = {__proto__:null}; if (has($value, "value")) { $descriptor.value = get($value, "value", $$value, access, membrane); if (boolean) { $descriptor.value = access.release(membrane.clean($descriptor.value)); } } if (has($value, "writable")) $descriptor.writable = value.writable; if (has($value, "get")) $descriptor.get = access.capture(value.get); if (has($value, "set")) $descriptor.set = access.capture(value.set); if (has($value, "enumerable")) $descriptor.enumerable = value.enumerable; if (has($value, "configurable")) $descriptor.configurable = value.configurable; return $descriptor; }; exports.TameValueToTameArguments = ($value, access, membrane) => { if ($value === null || (typeof $value !== "object" && typeof $value !== "function")) throw new TypeError("CreateListFromArrayLike called on non-object"); const $$value = membrane.taint($value); const value = access.release($value); const $arguments = {__proto__:null}; $arguments.length = value.length; for (let index = 0; index < $arguments.length; index++) $arguments[index] = get($value, index, $$value, access, membrane); return $arguments; };
STACK_EDU
ABAPGit use survey Do you use ABAPGit at your company? I'd like to add a list of real-world users, but I don't want it to start with a single company. If you are using ABAPGit in your ABAP development process and it's OK with you to have your company listed in the ABAPGit project, can you chime in here, and once we have 5-10 users I'll submit a pull request? Count Yelcho Systems Consulting in (that's me) and these other customers of mine - Sword Holdings and Inchcape Australia. I also have 2 other customers that wouldn't want to be named that use it too. Of course: RheinEnergie AG, Cologne, Germany (currently only "leaching") + SE38 IT-Engineering, Neuss, Germany (contributing) We have used it for Queensland Health, however only in offline mode right now. But we have big plans to use it for CI together with pipelines and Selenium testing. At emineo we use abapGit online repositories for product development. Other scenarios are offline repositories for custom development and migration projects. We are using ABAPGit for our ABAP SDK for Azure. So far it has been used at a few customers and in quite a few codejams. I have mostly been in the UI5 Fiori world for several years and have been preaching git there until it was built into WebIDE. (unfortunately one client was still using SVN, but source control at least) Recently I have a little more say in architecture and will be doing an ABAPGit PoC for people in the ABAP stack, with of course abapGit. Yes, we use ABAPGit at [Progress Management](www.pmconseil.com), mainly for internal tools that we reuse on every project. Tricktresor uses abapGit ;) Yes, some teams on SAP IBSO. We use abapGit at CQSE for development - not yet the full online experience with feature branches etc., but we are moving towards such a development model. We use abapGit at objective partner for backup of coding and for version control, but without feature branches etc. I use ABAPGit most of the time for my opensource solutions. Talking about clients, adoption here in Brazil is still a bit slow, but I've already implemented it along with ABAPLint and ABAPOpenChecks. Should mention that all of my customers use abapGit - they just don't all know it 😉 Haufe Group is using abapGit for installing ABAP open source and for version control. We are also setting up abapGit to sync our internal frameworks over the different system landscapes. My company (percept ltd) is using ABAPGit for ABAP developments for its customers. A use case is mostly backup of coding and also deploying developed ABAP objects to SAP systems. We started within the last few weeks with abapGit. It's a great tool. We use abapGit for decentralized ABAP development powered by Docker + OpenStack in SAP Labs Czech Republic. Still have the vision to use this as a 12h backup of our Z-code. The issue is cipherSuites somewhere between SAPserver <-> GitLab ... can't get it working. @TimoJohn use abapGitServer? This is of course not an off-site/system backup, but you will get visibility as to what is happening in the system. Closing this issue, list can be edited via pull requests, https://docs.abapgit.org/other-where-used.html
GITHUB_ARCHIVE
How to display all classes that have no definitions in CSS files? Is there a tool to display/highlight all elements which have certain classes defined, but there are no CSS rules for those classes defined in the CSS files? For example I have html code like: ... <div class="class_one">Some text for class one</div> <div class="class_two">Some text for class two obviously</div> ... And in .css files we have: ... .class_one { color: red; } .class_three { color: magenta; } ... In this case, if I need to know all classes with no definitions inside the CSS, I should get that "there are no class definitions for class_two". Also I should point out that this tool (or whatever) shouldn't be online since I do my projects using a local LAMP bundle (MAMP Pro, in my case). I hope my english isn't so bad :) The only OOTB solution I have come across is this one: http://unused-css.com/ Though it has limitations, obviously the site has to be online. But the main idea is clearly described in the schematics over on that site. You need to: Collect all used classes/ids in HTML/JavaScript Collect all defined selectors in CSS Cross both lists and see what is left Although that seems like a straightforward task, sometimes there is a different CSS for every page, or the CSS is rendered dynamically, etc. Edit: To collect all the ids and classes I would run these regexps on files: <(.*)class="(.*)"(.*)> <(.*)id="(.*)"(.*)> (Tested on http://regexpal.com/) With Notepad++ (or anything else that can come in handy while searching for patterns), I would collect the total set of items that are present in my HTML (possibly modify it for JavaScript too). Then I would collect the matched CSS classes into one regexp and match it against my CSS to see what's missing. How do you list the first part? (list of classes btw, there are no "selectors" in HTML) Well, you go through the HTML or JavaScript by hand or with a tool and parse all possible occurrences of those classes or ids: all the $('...'), getElementById's, class="...", id="...", etc. As I can see from the given website description, what it does is remove unused CSS selectors, but I need to do the opposite - I need to know all classes that do not have definitions inside the CSS. Ooooooh, got it :) I'll improve my answer. How to display all classes that have no definitions in CSS files? This is quite easy: parse all HTML elements, and populate a list with all entries of the class attribute. After this, parse the CSS and populate a second list. Remove all entries of the CSS list from the first list and you have your classes without a CSS declaration. Note that this gets much more complicated if you use dynamically created class names with JavaScript. Is there a tool to display/highlight all elements which have certain classes defined, but there are no css rules for this classes defined in css files? Haven't heard of any tool yet. This will do if I do not have any elements with complicated child elements, for which I might have selectors like .class_one table.error tr:first-child you can use a jQuery plugin like http://csslint.net/about.html OR you can also enter your CSS and find errors and warnings: http://csslint.net/index.html "CSS Lint is a tool to help point out problems with your CSS code" but in my case it's not about my CSS code, it's about HTML code, which may have some elements with non-existing classes.
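A minimal offline sketch of the cross-referencing idea described above, in the same spirit as the regexps given earlier; it only handles literal class="..." attributes and simple .class selectors, and the file names are placeholders:

```python
import re
from pathlib import Path

html = Path("index.html").read_text()
css = Path("style.css").read_text()

# Classes used in markup: split every class="..." attribute on whitespace.
used = {c for attr in re.findall(r'class="([^"]+)"', html) for c in attr.split()}

# Classes that appear as .selector anywhere in the stylesheet.
defined = set(re.findall(r'\.([A-Za-z_][\w-]*)', css))

print("No CSS rules for:", sorted(used - defined))  # e.g. ['class_two']
```

Dynamically generated class names (in JavaScript or in templates) will slip past this, which is the same caveat raised in the answer above.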
STACK_EXCHANGE
hidden-definition-finder.rb `write_constants': Your source can't be read by Sorbet. (RuntimeError) disclaimer: occurs in a kind of legacy project. Gem versions: sorbet (0.5.5784) sorbet-rails (0.7.0) Input srb rbi update Observed output Generating /var/folders/v_/xmfm5zgd58j5l108ftb89s9r0000gn/T/d20201106-75025-ok2ri/reflection.rbi with 16579 modules and 264 aliases Printing your code's symbol table into /var/folders/v_/xmfm5zgd58j5l108ftb89s9r0000gn/T/d20201106-75025-ok2ri/from-source.json /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/lib/hidden-definition-finder.rb:119:in `write_constants': Your source can't be read by Sorbet. (RuntimeError) You can try `find . -type f | xargs -L 1 -t bundle exec srb tc --no-config --error-white-list 1000` and hopefully the last file it is processing before it dies is the culprit. If not, maybe the errors in this file will help: /var/folders/v_/xmfm5zgd58j5l108ftb89s9r0000gn/T/d20201106-75025-ok2ri/from-source.json.err from /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/lib/hidden-definition-finder.rb:47:in `main' from /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/lib/hidden-definition-finder.rb:38:in `main' from /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/bin/srb-rbi:232:in `block in make_step' from /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/bin/srb-rbi:121:in `init' from /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/bin/srb-rbi:196:in `main' from /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5784/bin/srb-rbi:237:in `<main>' Expected behavior I expect the command finishes without breaking. I was able to run this a few weeks ago, the only difference is I updated a few gems, but sorbet remained the same. the error file has this content: /Users/benja/.rvm/gems/ruby-2.4.10/gems/sorbet-0.5.5939/bin/srb: line 46: 42467 Segmentation fault: 11 "${sorbet}" "${args[@]}" upgrading sorbet solved it
GITHUB_ARCHIVE
This article is the second part of a three-part series on modernizing IT service management (ITSM). The first, on how a modernization and digital transformation in IT can move IT to more of a value basis vs. cost basis, can be found here. Now we delve into some of the facets of Modern Service Management and why ITSM needs to be modernized. Modern Service Management Modern Service Management, coined in early 2016 by Microsoft Enterprise Services, is not a framework, certification/training model, or a means to generate revenue. Nor is it a marketing campaign from Microsoft. In fact, at the time, it was more of an internal call to arms to assist customers in modernizing IT, in support of Microsoft’s former “Cloud First, Mobile First” strategy, and more so, our “Intelligent Cloud/Intelligent Edge” strategy. Modern Service Management is defined as: “A lens intended to focus service management experts, around the globe, on the most important outcomes that evolve our customers from legacy, traditional IT models to an easier, more efficient, cost effective, and agile service structure.” Modern Service Management is meant to focus on people, collaboration, and relationships, not just technology and processes. The table below highlights some of the key differences between traditional IT and modern IT:

| Traditional IT | Modern IT |
| --- | --- |
| Design for Success (HA/Redundant) | Design for Failure (Resilient) |
| In Documents, Optimized, Redesigned | Self Service, Knowledge, Low Friction, Automated |
| Isolated, Manually Initiated | Systemic, Triggered, Automatic |
| Element, Fault Focused | Service, End-to-End Capability Focused |
| Service Desk / Contact Center | Customer Experience / Self Service |
| N-1 or Older | |
| Configuration / Asset Management: Discovered / Manual Configuration | Prescribed, Declarative, Automated |
| | App-Aligned, Artificial Intelligence |
| Failure Priority Time Factor: Mean time to Repair | Mean time to Detect, Identify, Remediate / Eliminate |

Modern Service Management prescribes that modern IT has: - Evolved to disintermediate itself between the organizations they serve and most-often cloud based services. This actually drives more value from IT, resulting in greater investment in IT. Accomplished when IT becomes the enabler, not disabler, of digital transformation. - Established continuous service delivery with shorter, more frequent release cycles, whereby incremental value is experienced sooner, and errors and defects are caught earlier and fixed sooner in a release. - Assumed failures will occur (beyond just technical ones) and services, infrastructure, and applications are built stateless, resilient, and automatically responsive to the best extent possible to react to these failures. - Identified relationships and dependencies in services end-to-end, across technologies or lifecycle silos or technology domains. Service Maps are necessary to address the monitoring and remediation required by DevOps. - Eliminated manual efforts where possible, with processes automated to reduce friction, increasing the flow of value and enabled through self-service and knowledge. And treating manual work efforts as a “bug” often changes the acceptance of manual work effort as “normal.” - Made automation systemic, triggered automatically rather than manually started by a human being. - Broken monitoring into a monitoring service and what gets monitored end to end by the DevOps teams. End-to-end monitoring encompasses more than just whether a service is up or down, but whether the intended service is capable of being completed by the customer.
- Evolved support such that it’s focused on customer care and self-sufficiency rather than just quick closures and process efficiencies. - Updated and maintained current versions of applications, software, and services as close to current as possible, with a preference for “mixed mode” operations where N and N+1 can co-exist and be supported in a software factory model (rotating support teams). - Evolved configuration and asset management to be more declarative, with prescribed and automated asset and configuration management rather than discovered or manually maintained. - Leveraged modern artificial intelligence (AI), bot, and machine learning capabilities to provide support that considerably offloads manual, interrupt-driven, and costly service desk calls. - Established goals and monitoring capabilities that have reduced time to detect and time to identify defects and errors, which help to reduce time to remediate and resolve as well as eliminate defects within services. Modern IT is in the cloud While this modern approach to IT and Modern Service Management is possible on-premises, it’s more cost-effective in the cloud because many of the patterns and practices are, or can be, automated, which is why most digital transformation is occurring because of the capabilities the cloud provides. Digital transformation, via the cloud, is possible because: - Businesses must innovate to compete (in their respective industries) - Devices are inexpensive (think IoT) - Compute is powerful and inexpensive, and now more secure in the cloud than on-premises (and easily automated) - Storage is inexpensive (and easily automated) - Internet (and network) connectivity is prevalent and inexpensive (and easily automated) - Cloud enables transformational stuff to be at everyone’s fingertips, e.g. garage developers now have access to machine learning and AI through the cloud - Agile/DevOps makes it all happen faster (and easily automated). Modern Service Management versus ITSM There are many patterns and practices that differentiate it from traditional ITSM. We are not by any means saying that traditional ITSM is wrong. It just hasn’t evolved in several areas: - The ITSM industry sought to be “technology agnostic,” which is OK when all the players in the market have comparable technologies, which is not the case. - Leveraging capabilities and practices available from modern cloud services that change or eliminate the need for legacy and manual patterns and practices. For example, to understand something like continuous flow (a facet of Modern Service Management), we use analogies such as the following to illustrate subtle and not-so-subtle differences between Modern Service Management and traditional ITSM. Consider traditional ITSM change management, which has often been incorrectly implemented. Nowhere in “the books” does it say every change must go through the change advisory board (CAB), yet it is often run like a four-way intersection, with various administrative stops, approvals, and oversights, as well as more technology being added where perhaps it isn’t needed. Like a change always arriving and having to process through a CAB, a vehicle will often have to stop even when there isn’t traffic. Contrast this with Modern Service Management – which promotes continuous flow, “shift left”/DevOps approaches to security and service management controls, and automated quality and monitoring – which is more like a roundabout or turning circle.
This allows for: - The continuous flow of traffic when there isn’t any other traffic - Autonomous decision making - Less technology being held up, thanks to pre-established boundaries on how traffic should travel. Change management should have always been about managing risk and balancing and promoting agility, not promoting delay and enforcing control. Why modernize IT? Perhaps the best way to answer this question is to ask the inverse. What happens if IT doesn’t modernize? If you were trying to make IT less relevant to its organization, what would you do to accomplish that? It is our belief that: - The world has changed. Most employees of organizations are considerably more educated when it comes to information technology, and as current generations move into leadership, this will be more assumed than in the past. - The technology is simpler to implement and use, and many in business roles know this. - IT budgets will continue to shrink or stay flat, thus rendering IT into a broker role and owner of directory and governance (either painfully or “planfully”). - IT relevance will continue to shrink unless it’s able to demonstrate, not just communicate, its true value and contribution to the business bottom line. This happens by brokering services that cannot possibly be delivered internally. We hope that this article on Modern Service Management has been helpful. Please look out for the third and final article, which outlines and highlights how the many facets of Modern Service Management are accomplished and implemented – The 12-Step Journey to Modern Service Management. John Clark is a Cross-Domain Solution Architect (CDSA) in Microsoft Enterprise Services as well as the Worldwide Modern Service Management Community Lead and former Subject Matter Expert. John was formerly an ITSM Solution Architect for Microsoft Enterprise Services as well and continues to incorporate modern ITSM in his new role. As a Microsoft CDSA, John is responsible for shaping opportunities with customers leveraging and implementing cross-domain Microsoft solutions that incorporate Azure, Office 365, Dynamics 365 and Microsoft 365. John has received several honors in recent years, including being selected for the WW ITSM Communities SME Award, Microsoft Sr. Technology Leadership Program (2014), and the Americas Gold Club (2016). He is also a past president of the Ohio Valley itSMF USA LIG and a former LIG of the Year recipient. Kathleen is an Architect at Microsoft for the Cloud and Infrastructure Management Center of Excellence, focusing on developing solutions for management and adoption of private, hybrid, and public clouds, and DevOps. Kathleen’s strengths lie in understanding what is needed to adopt, support and operate solutions either on premise or in the cloud. She has over 21 years of experience in IT and has worked in both IT operations and consulting. Prior to her Architect role, Kathleen was a consultant for Microsoft Consulting Services Canada, where she focused on assisting customers with the adoption of Microsoft products, service management, and private cloud. Kathleen leads a worldwide community of peers who have transformed traditional ITSM into modern service management. Kathleen achieved her ITIL Foundation certification in 1998, is currently an ITIL v3 Expert, and actively tweets about modern service management to get people to rethink traditional service management fundamentals and make them more modern and actionable with enabling technologies.
Kathleen co-authored an MS Press book, Optimizing Service Manager, and was a contributing author to the System Center 2012 Service Manager Unleashed book. She is an Edutainer and has spoken at many internal and external Microsoft/industry events.
OPCFW_CODE
[ofa-general] Memory registration redux rdreier at cisco.com Mon May 18 14:15:11 PDT 2009 > When our memory hooks tell us that memory is about to be removed from > the process, we unregister all pages in the relevant region and remove > those entries from the cache. So the next time you look in the cache > for 0x3000-0x3fff, it won't be there -- it'll be treated as So you want the registration cache to be reference counted per-page? Seems like potentially a lot of overhead -- if someone registers a million pages, then to check for a cache hit, you have to potentially check millions of reference counts. > > How does 0x1000 to 0x3fff get registered as a single Memory Region? > > If it is legitimate to free() 0x3000..0x3fff then how can there ever > > be a > > legitimate reference to 0x1000..0x3fff? If there is no such single > > reference, > > I don't see how a Memory Region is ever created covering that range. > > If the user creates the Memory Region, then they are responsible for > > not > > free()ing a portion of it. > Agreed. If an application does that, it deserves what it gets. Hang on. The whole point of MR caching is exactly that you don't unregister a memory region, even after you're done using the memory it covers, in the hope that you'll want to reuse that registration. And the whole point of this thread is that an application can then free() some of the memory that is still registered in the cache. > Per my prior mail, Open MPI registers chunks at a time. Each chunk is > potentially a multiple of pages. So yes, you could end up having a > single registration that spans the buffers used in multiple, distinct > MPI sends. We reference count by page to ensure that deregistrations > do not occur prematurely. Hmm, I'm worried that the exact semantics of the memory cache seem to be tied into how the MPI implementation is registering memory. Open MPI happens to work in small chunks (I guess) and so your cache is tailored for that use case. I know the original proposal was an attempt to come up with something that all the MPIs can agree on, but it didn't cover the full semantics, at least not for cases like the overlapping sub-registrations that we're discussing here. Is there still one set of semantics everyone can agree on? More information about the general
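For readers following the thread, a toy model of the per-page reference counting under discussion is sketched below. This is not Open MPI's code; the page size and the register/evict hooks are stand-ins for the real memory hooks:

```python
PAGE = 4096

class RegCache:
    """Toy registration cache that reference-counts registered pages."""
    def __init__(self):
        self.refcount = {}  # page base address -> number of live registrations

    def _pages(self, addr, length):
        return range(addr & ~(PAGE - 1), addr + length, PAGE)

    def register(self, addr, length):
        for page in self._pages(addr, length):
            self.refcount[page] = self.refcount.get(page, 0) + 1

    def evict_range(self, addr, length):
        """Called from the memory hook when pages are returned to the OS."""
        for page in self._pages(addr, length):
            self.refcount.pop(page, None)  # future lookups miss and must re-register

    def hit(self, addr, length):
        return all(page in self.refcount for page in self._pages(addr, length))

cache = RegCache()
cache.register(0x1000, 3 * PAGE)    # covers 0x1000-0x3fff
cache.evict_range(0x3000, PAGE)     # the free() of 0x3000-0x3fff discussed above
print(cache.hit(0x1000, 3 * PAGE))  # False -> the cached registration is no longer trusted
```

The overhead concern raised above shows up in hit(): a lookup over a large registration touches every page it spans.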
OPCFW_CODE
We are working to provide a community of, and for, interested product developers. Community feedback is encouraged. We'll do our best to find answers to your questions, give you tips, and get you going. To join our community, you must do the following: 1. If you are not a member of dev.java.net, click on the Register link in the upper right hand corner of this page to register your user name and create a Java.net user ID. 2. Click on the link below to join the Open MQ project. Specify the role you are interested in and submit your request though at this time we are only accepting Observer members. Be patient! You will receive an email when your request is approved. 3. To post comments about the product, how it is used, problems, etc. please write to firstname.lastname@example.org. We'll do our best to get you a response. 4. Joining the Open MQ project does not automatically subscribe you to the MQ mail list alias. If you want to receive announcements, or join in the conversation (or just monitor how others are getting along) you are welcome to join any mailing list on this page. We'd love to hear from you! You will find more community contribution options at this page. If you are using the source code -- having problems with the build instructions, or just wondering why the code is written the way it is, send us a note on email@example.com. Anyone, Open MQ project member or not, can contribute to the user community. You can do this by posting to the e-mail lists (firstname.lastname@example.org or email@example.com). We strongly encourage all members to join these alias lists, as well as the Open MQ Forum. These lists are moderated to reduce SPAM. If you are not a member of the alias, your message will be reviewed prior to posting. This is only to prevent unwanted SPAM e-mail. These contributions are extremely valuable and help the development community make better decisions about what features are working well, and where we can focus our attention better. Anyone can post any issues (bugs, product requests, Etc.) using issue tracker. To do this, click the Issue Tracker link on the navigation bar to the left. In addition, you might be able to find answers to your questions by searching or browsing through the community feedback at the Java Message Queue forum. To Join this forum, you will need a separate Software Developer Network user ID. Once you have that ID, you can subscribe to the Open MQ forum, or any of several hundred technical and community forums that are hosted by Sun. If you are interested in contributing some source-code, or anything that you would like to become part of the community release of Open MQ, please read further... We welcome contributions, but, at this time we are not able to take direct contributions to the source code, via our on-line repository. We are happy to accept your contributions and request that all contributors complete the Oracle Contributor Agreement. This protects your rights as well as ours, and other users of this community project. Before submitting anything, please contact us via the developer alias first so that we can give you further details about how to go about submitting your addition. If you are interested in this, we suggest you review the contributor details (see the section about code and samples) in the GlassFish Server Open Source Edition project for more details. 
Since MQ is a rather complex product and we have many customers who rely on continued reliability and compatibility, we have a rather prescriptive process for developing new features. I won't describe that completely here, but I will offer a couple of documents which can help you see what types of considerations we make before implementing anything more elaborate than a bug-fix. We generally try to describe what we are going to do with the following documents. Have a look: Onepager -- HTML Functional Specification -- HTML -- In this document are links to existing submissions that are on-file with the Open Solaris ARC. We take the view that "You are responsible for considering everything that's asked about in these documents. You can exercise your good judgement and decide which sections are required and which can be ignored." We assume anyone who'd want to make a contribution would do so in a professional and complete manner. Just because you don't fill in a section doesn't mean you aren't responsible for knowing the answer. In general, the Onepager is written first and then we decide if the feature request / proposal should be implemented. If we decide to proceed, code development then gets underway (OK, it's not always this clean, but this is what we strive for). Once the feature is far enough along, and certainly before it's determined to be "feature complete," we create the Functional Specification. Between these two documents, it should be possible to schedule, monitor, and predict when the feature will be completed; it should be possible for a documentation writer to describe the feature (or, at least, interview the developer more effectively about what to write); and it should be possible to write Quality Engineering test development plans to ensure that the feature has been fully implemented and can be verified for some level of correctness. We are interested in your contributions. We do think that many of our community users will find that using IssueTracker is just fine to meet their needs. For those who do want to make contributions, please join us on the firstname.lastname@example.org alias.
OPCFW_CODE
M: Engineers Aren't Attending Career Fairs Anymore - e15ctr0n http://techcrunch.com/2014/09/11/corporate-america-your-future-engineers-arent-attending-career-fairs-anymore/ R: No1 The CEO of a company that promotes hackathons wants to try to convince the world that all forms of recruiting are old-timey except for hackathons. Take it with a grain of salt. R: dysfunction I don't see this article actually supporting the assertion that career fairs are declining. In the technical majors at my school (UMass Amherst class of '14), they were the first steps to getting internships and jobs for almost everyone who got them. Nearly every CS junior and senior, and many-to-most sophomores, went at least once, if not twice, per year. R: crazypyro This is my experience as a current student at a predominantly engineering/technology university. Our career fairs are very large and almost all juniors/seniors attend. It's been growing steadily every year as well. That's not to say that companies don't try to get a head start (for example, tons of companies are having lawn events and other informationals, which really just mean they want resumes from interested students, for a few weeks before the actual career fair), but the career fair is definitely the central employer-student communication venue. R: marssaxman Since when did they ever? Was this really a thing? It seems to have come and gone without affecting me or anyone I know. R: crazypyro They are very common at universities which, I think, was the focus of this article.
HACKER_NEWS
package g8

import (
	"e8vm.io/e8vm/g8/ir"
	"e8vm.io/e8vm/g8/types"
	"e8vm.io/e8vm/lex8"
)

func buildBasicArith(b *builder, ret, A, B *ref, op string) {
	if op == "%" || op == "/" {
		isZero := b.newCond()
		b.b.Arith(isZero, B.IR(), "==", ir.Num(0))
		zeroPanic := b.f.NewBlock(b.b)
		after := b.f.NewBlock(zeroPanic)
		b.b.JumpIfNot(isZero, after)
		b.b = zeroPanic
		callPanic(b, "divided by zero")
		b.b = after
	}
	b.b.Arith(ret.IR(), A.IR(), op, B.IR())
}

func binaryOpInt(b *builder, opTok *lex8.Token, A, B *ref, t types.T) *ref {
	op := opTok.Lit
	switch op {
	case "+", "-", "*", "&", "|", "^", "%", "/":
		ret := b.newTemp(t)
		buildBasicArith(b, ret, A, B, op)
		return ret
	case "==", "!=", ">", "<", ">=", "<=":
		ret := b.newTemp(types.Bool)
		b.b.Arith(ret.IR(), A.IR(), op, B.IR())
		return ret
	}
	b.Errorf(opTok.Pos, "%q on ints", op)
	return nil
}

func binaryOpUint(b *builder, opTok *lex8.Token, A, B *ref, t types.T) *ref {
	op := opTok.Lit
	switch op {
	case "+", "-", "*", "&", "|", "^", "%", "/":
		ret := b.newTemp(t)
		buildBasicArith(b, ret, A, B, op)
		return ret
	case "==", "!=":
		ret := b.newTemp(types.Bool)
		b.b.Arith(ret.IR(), A.IR(), op, B.IR())
		return ret
	case ">", "<", ">=", "<=":
		ret := b.newTemp(types.Bool)
		b.b.Arith(ret.IR(), A.IR(), "u"+op, B.IR())
		return ret
	}
	b.Errorf(opTok.Pos, "%q on ints", op)
	return nil
}

func binaryOpConst(b *builder, opTok *lex8.Token, A, B *ref) *ref {
	op := opTok.Lit
	if !A.IsSingle() || !B.IsSingle() {
		b.Errorf(opTok.Pos, "invalid %s %q %s", A, op, B)
		return nil
	}
	va, oka := types.NumConst(A.Type())
	vb, okb := types.NumConst(B.Type())
	if !(oka && okb) {
		b.Errorf(opTok.Pos, "non-numeric consts ops not implemented")
		return nil
	}

	r := func(v int64) *ref { return newRef(types.NewNumber(v), nil) }
	br := func(b bool) *ref {
		if b {
			return refTrue
		}
		return refFalse
	}

	switch op {
	case "+":
		return r(va + vb)
	case "-":
		return r(va - vb)
	case "*":
		return r(va * vb)
	case "&":
		return r(va & vb)
	case "|":
		return r(va | vb)
	case "^":
		return r(va ^ vb)
	case "%":
		if vb == 0 {
			b.Errorf(opTok.Pos, "modular by zero")
			return nil
		}
		return r(va % vb)
	case "/":
		if vb == 0 {
			b.Errorf(opTok.Pos, "divide by zero")
			return nil
		}
		return r(va / vb)
	case "==":
		return br(va == vb)
	case "!=":
		return br(va != vb)
	case ">":
		return br(va > vb)
	case "<":
		return br(va < vb)
	case ">=":
		return br(va >= vb)
	case "<=":
		return br(va <= vb)
	case "<<":
		if vb < 0 {
			b.Errorf(opTok.Pos, "shift with negative value %d", vb)
			return nil
		}
		return r(va << uint64(vb))
	case ">>":
		if vb < 0 {
			b.Errorf(opTok.Pos, "shift with negative value %d", vb)
			return nil
		}
		return r(va >> uint64(vb))
	}

	b.Errorf(opTok.Pos, "%q on consts", op)
	return nil
}

func unaryOpInt(b *builder, opTok *lex8.Token, B *ref) *ref {
	op := opTok.Lit
	switch op {
	case "+":
		return B
	case "-", "^":
		ret := b.newTemp(B.Type())
		b.b.Arith(ret.IR(), nil, op, B.IR())
		return ret
	}
	b.Errorf(opTok.Pos, "invalid operation: %q on %s", op, B)
	return nil
}

func unaryOpConst(b *builder, opTok *lex8.Token, B *ref) *ref {
	op := opTok.Lit
	if !B.IsSingle() {
		b.Errorf(opTok.Pos, "invalid operation: %q on %s", op, B)
		return nil
	}
	v, ok := types.NumConst(B.Type())
	if !ok {
		// TODO: support type const
		b.Errorf(opTok.Pos, "typed const operation not implemented")
		return nil
	}
	switch op {
	case "+":
		return B
	case "-":
		return newRef(types.NewNumber(-v), nil)
	}
	b.Errorf(opTok.Pos, "invalid operation: %q on %s", op, B)
	return nil
}
STACK_EDU
TileDB enables concurrent writes and reads that can be arbitrarily mixed, without affecting the normal execution of a parallel program. This comes with a more relaxed consistency model, called eventual consistency. Informally, this guarantees that, if no new updates are made to an array, eventually all accesses to the array will “see” the last collective global view of the array (i.e., one that incorporates all the updates). Everything discussed in this section about array fragments is also applicable to array metadata.

We illustrate the concept of eventual consistency in the figure below (which is the same for both dense and sparse arrays). Suppose we perform two writes in parallel (by different threads or processes), producing two separate fragments. Assume also that there is a read at some point in time, which is also performed by a third thread/process (potentially in parallel with the writes). There are five possible scenarios regarding the logical view of the array at the time of the read (i.e., five different possible read query results). First, no write may have completed yet, therefore the read sees an empty array. Second, only the first write got completed. Third, only the second write got completed. Fourth, both writes got completed, but the first write was the one to create a fragment with an earlier timestamp than the second. Fifth, both writes got completed, but the second write was the one to create a fragment with an earlier timestamp than the first.

Illustration of eventual consistency

The concept of eventual consistency essentially tells you that, eventually (i.e., after all writes have completed), you will see the view of the array with all updates in. The order of the fragment creation will determine which cells are overwritten by others and, hence, greatly affects the final logical view of the array. Eventual consistency allows high availability and concurrency. This model is followed by the AWS S3 object store and, thus, TileDB is ideal for integrating with such distributed storage backends. If strict consistency is required for some application (e.g., similar to that in transactional databases), then an extra layer must be built on top of TileDB Embedded to enforce additional synchronization.

But how does TileDB deal internally with consistency? This is where opening an array becomes important. When you open an array (at the current time or a time in the past), TileDB takes a snapshot of the already completed fragments. This is the view of the array for all queries that will be using that opened array object. If writes happen (or get completed) after the array got opened, the queries will not see the new fragments. If you wish to see the new fragments, you will need to either open a new array object and use that one for the new queries, or reopen the array (reopening the array bypasses closing it first, permitting some performance optimizations). We illustrate with the figure below. The first array depicts the logical view when opening the array. Next suppose a write occurs (after opening the array) that creates the fragment shown as the second array in the figure. If we attempt to read from the opened array, even after the new fragment creation, we will see the view of the third array in the figure. In other words, we will not see the updates that occurred between opening and reading from the array. If we'd like to read from the most up-to-date array view (fourth array in the figure), we will need to reopen the array after the creation of the fragment.
Different views when opening the array in the presence of concurrent writes

When you write to TileDB with multiple processes, if your application is the one to be synchronizing the writes across machines, make sure that the machine clocks are synchronized as well. This is because TileDB sorts the fragments based on the timestamp in their names, which is calculated based on the machine clock.

Here is how TileDB reads achieve eventual consistency on AWS S3:

1. Upon opening the array, list the fragments in the array folder.
2. Consider only the fragments that have an associated .ok file (the ones that do not have one are either in progress or not visible due to S3’s eventual consistency). The .ok file is PUT after all the fragment data and metadata files have been PUT in the fragment folder.

The above practically tells you that a read operation will always succeed and never be corrupted (i.e., it will never have results from partially written fragments), but it will consider only the fragments that S3 makes visible (in their entirety) at the timestamp of opening the array.
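To make the open/reopen behavior described above concrete, here is a minimal sketch using the TileDB-Py bindings; the array name is a placeholder, and the exact call names (tiledb.open, Array.reopen) are assumptions based on the description above rather than a definitive API reference.

import tiledb

# Opening the array snapshots the fragments that are complete right now.
with tiledb.open("my_array", mode="r") as A:
    before = A[:]      # queries on this object only see the snapshotted fragments

    # ... a concurrent writer completes a new fragment here ...

    still_old = A[:]   # the opened object still reflects the old snapshot

    A.reopen()         # refresh the snapshot without closing first
    after = A[:]       # now the new fragment is visible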
OPCFW_CODE
Help With Dual Boot Vista/xp And Raid 0 From Daemon Tools, mount the Vista DVD image and proceed to installation. I guess the problem was caused from the boot partition not being on the same parition as the initial XP install but am not sure. This is the second time I have tried to install Vista, the first was Vista x86 bit. Ensure you have the Vista DVD image emulated or in the DVD drive. this contact form You can use the BCDEdit tool to see what is the boot configuration by Vista. The problem is that I want to format the I: drive to get rid of VISTA all together and not loose any other information from any other drive. Vista loaded and showed my partitioned Raid 0 array but no operating system is shown. Hal.dll is still missing. What should I do? If all goes well, ill be running vista in no time! (infortunately by the time i heard about RC2 microsoft had already closed the download, 🙁 but no big deal i Reply Simon Kurash says: September 19, 2006 at 8:25 am I'm a stupid person and I didn't choose a different directory for Windows Vista RC1, I upgraded the version as it - Since then I can't boot into XP anymore since it can't locate the hal32.dll. - The question to my main problem is located on the last sentance of the last paragraph. - Could I just format the new 250g which has vista from the start up menu on my board? - Dual boot Windows 7 & XP on Raid 0 - No floppy drive Discussion in 'General Software' started by NdMk2o1o, Jul 3, 2010. - Any help to make this without losing any data? - When the computer rebooted, Vista began to uninstall itself. - Our forum is dedicated to helping you find support and solutions for any problems regarding your Windows 7 PC be it Dell, HP, Acer, Asus or a custom build. - Recently, I formatted my C:, and now my Vista stopped working. My XP works fine. I do, however, like Vista for everything else (mostly Office apps and internet, so the Aero interface is very convenient).So I guess the only opinion that I couldn't find to by any help would be greatly appreciated! please help Thanks Arun Reply Donna Newsome says: July 6, 2006 at 4:26 pm hi i can't defragg my PC defragmenter tells me degragmeneter disk engine is lost. It then asks me to hit alt-ctrl-delete to restart comp. Then installed windows 7 on the partition. Probably until I get my hands on one. Several functions may not work. Sim Site Rankings The AVSIM Staff Flight Simulation's Premier Resource! AVSIM is a free service to the flight simulation community. Installation & Setup no boot after enabling BIOS RAID0I just bought a refurb Dell XPS 420 w/ Q9300, running Vista Home Premium x64 and downloaded/installed an activated version of Win 7 I mounted vista_5728.16387.060917-1430_x86fre_client-lrmcfre_en_dvd.iso to my virtual F: drive using D-tools. All rights reserved.Make Tech Easier is a member of the Uqnic Network. But otherwise, you should just use the "fixmbr" method in the XP install CD through the repair environment. I assume I can just go into the BIOS and disable the nForce RAID controller. any solutions will be very appreciated ! I am going to install a second hard drive, but am not sure how to get Vista to recognize the second drive for an installation. The second HDD is SATA with 1st partition primary and active and size 40 GB, and other 2 logical partition. weblink at boot up i choose ‘earlier version of windows' but the screen blacks out and nothing happens. Reply Long Zheng says: November 23, 2006 at 4:03 pm @djlasky: Step 3. 
In fact, I am able to save other files in that particular partition. I have vista pre-installed on my system and I don't want to wipe out everything to install windows 7 beta, you know just to test it out. the boot selection menu still was there . Thanks in advance. navigate here I tried reinstalling ntoskrnl from the original file, but that did not work either. Now, I have installed Vista RC2 on this laptop before in a single OS configuration and it does work. Reply Long Zheng says: October 9, 2006 at 12:50 pm @Rey Flores: You can try one of three things, ranked from easiest to hardest. According to VBP Windows tried to boot from F:, but boot.ini, ntldr and ntdetect.com are on T:. I've made sure CD is first boot device in bios, and has always worked (this is how I installed vista)… now, I get the prompt to boot from CD (press any After the installation, you should see the black screen for you to select "Older version of windows" or "Windows 7" in the bootup screen. Make sure the XP CD is SP2 (or whichever service pack you're running), otherwise it may cause problems. In some cases, the existing drive letter could conflict with the CD-rom drive letter and that's why you can't see the partition when you boot up XP installer CD.Hope this helpsDamien Im currently formatting my hard drive. After a few attempts due to sata driver issues I finally got a smooth clean install of XP on a single HD. BSOD Help and Support Our Sites Site Links About Us Find Us Vista Forums Eight Forums Ten Forums Help Me Bake Network Status Contact Us Legal Privacy and cookies Windows 7 I really don't want to have to have a dvd in my computer everytime I want to turn it on. http://exomatik.net/help-with/help-with-vista-i-m-sure-i-m-still-infected.php Now in case you didn't know, Windows XP (older version) does not recognize SATA drives. Set the hard disk or RAID set with the boot manager as the second 2nd boot item. After a few attempts due to sata driver issues I finally got a smooth clean install of XP on a single HD. I have however discovered that if I leave the dvd with the RC1 ISO in my disc drive it manages to make it to the boot manager screen and it's all For virtual drive emulation, I recommend Daemon Tools. He said that if I install Windows XP in one of those new volume partitions that OS is not going to work, and if it does maybe I will lost my I have the Knack. ** If I haven't replied in 48 hours, please send me a message. I ran the custom install of Vista from within XP and after it rebooted I was never presented with a Bootloader screen to choose which OS to load. I have decided to rather wait until Vista is more stable and tried to uninstall Vista but with no luck. Reply Matt S. Then get rid of the entry for the T: XP in boot.ini (do this from the current XP just to be safe), delete the Windows folders from T: (but leave the Jul 3, 2010 at 10:20 PM #11 NdMk2o1o Joined: Apr 21, 2010 Messages: 4,256 (1.72/day) Thanks Received: 1,621 Location: Redditch, Worcestershire, England System Specs System Name: Hi5-3570k Processor: i5 3570k - Works for me but applying full-access permissions takes a while on all those files. FYI My current boot.ini on my C: drive looks like this: ; ;Warning: Boot.ini is used on Windows XP and earlier operating systems. ;Warning: Use BCDEDIT.exe to modify Windows Vista boot WinXP installed on the partition C: and now I install Win 7 in partition D:. If not, then just use the "fixmbr" command with your X64 CD. 
http://www.litepc.com/xplite.html Last edited: Jul 3, 2010 Jul 3, 2010 at 7:15 PM #3 Loosenut Joined: Jan 14, 2010 Messages: 924 (0.36/day) Thanks Received: 221 Location: Granby, Qc. Remove two files (Boot.BAK & Bootsect.BAK) on your XP drive's root folder (C:), these were backup files of your previous bootloader, now no longer useful. My questions are the following one where to delete the old reference of Vista on the boot menu. Reply OkComp says: October 24, 2006 at 1:13 pm Hi, I tried to upgrade the OS and at the last step of the installation I got a message saying that it
OPCFW_CODE
""" A module that produces 'nice' random math objects, such as vectors, matrices, polynomials, etc. """ # import numpy as np from numpy import random as nr from sympy import * from typing import Tuple, List MatSize = Tuple[int, int] Vectors = List[str] def gen_matrix_rank(size: MatSize, rank: int, max_denom: int = 1, max_val: int = 3): """ Generates a random (size[0])x(size[1]) matrix with given rank. """ A, piv = zeros(size[0], size[1]), nr.choice(rank, size[1]) for i in range(rank): A[i,piv[i]] = 1 for j in range(piv[i]+1, size[1]): A[i,j] = nr.choice([a for a in range(-1*max_val + 1, max_val)]) for i in range(size[0]): j = nr.choice(size[0], min(size[0],3), replace=False) for k in j: if k!=i: A = A.elementary_row_op(op="n->n+km", row=i, row2=k, k=nr.choice(max_val)*nr.choice([-1,1])) return A def gen_diagonal_matrix(size: int, det, max_denom: int =1): """ Generates a random diagonal (size)x(size)-matrix with given determinant. """ A, a, idx = eye(size), nr.choice(size), nr.choice(size, min(2*int(max(size,3)/3),size), replace=False) A = A.elementary_row_op(op="n->kn", row=idx[0], k=det) for x in range(int(size/3)): aux = nr.choice(3,2,replace=False) i, j = 3*x + aux[0], 3*x + aux[1] if det: A = A.elementary_row_op(op="n->kn", row=i, k=max_denom) A = A.elementary_row_op(op="n->kn", row=j, k=Rational(1,max_denom)) else: A = A.elementary_row_op(op="n->kn", row=i, k=0) return A def gen_int_triang_matrix(size: int, det, upper=True, max_val: int =7, max_denom: int = 1): """ Generates a random triangular (size)x(size)-matrix with given determinant by taking linear combinations with integer coefficients of the rows of a diagonal matrix. """ A, idx = gen_diagonal_matrix(size, det, max_denom), nr.randint(size) #A = A.elementary_row_op(op="n->kn", row=idx, k=det) for i in range(size): for j in range(i+1,size): A = A.elementary_row_op(op="n->n+km", row=i, row2=j, k=nr.choice(max_val)*nr.choice([-1,1])) # A[i,j] = nr.choice(max_val)*nr.choice([-1,1]) if not upper: A = A.transpose() return A def gen_sq_matrix(size: int, det, max_val: int =3, max_denom: int =1): """ Generates a random (size)x(size)-matrix with given determinant produced by taking linear combinations with integer coefficients of the rows of a triangular matrix. 
""" A = gen_int_triang_matrix(size, det, upper=nr.random([0,1]), max_val=max_val) x = nr.choice(size, 2*int(size/3), replace=False) for i in x: A = A.elementary_row_op(op="n->kn", row=i, k=-1) perm = nr.choice(size, size, replace=False) for i in range(size): j = nr.choice([a for a in range(size)], int(size/2), replace=False) for k in j: if k!=perm[i]: c = nr.choice(max_val-1)+1 A = A.elementary_row_op(op="n->n+km", row=perm[i], k=c, row2=k) return A def gen_sym_matrix(size: int, sym=True, max_val:int=3): """Generates a random integer valued (size)x(size)-matrix either symmetric or antisymmetric.""" A = gen_sq_matrix(size, 0, int(max_val/2)+1) if sym: # True = Symmetric A = A + A.transpose() x = nr.choice(size, 2*int(size/3), replace=False) for i in x: A[i, i] = A[i, i]/2 else: # False = Anti-symmetric A = A - A.transpose() return A def gen_column_vector(size: int, max_val: int =4, max_denom: int =1): return gen_matrix_rank(size=(size,1), rank=1, max_val=max_val, max_denom=max_denom) def gen_row_vector(size: int, max_val: int =4, max_denom: int =1): return gen_column_vector(size=size, max_val=max_val, max_denom=max_denom).T def gen_monic_poly(degree: int, lin_factors: bool =False, variable: str ='x', max_val: int =4, max_denom: int =1): x = symbols(variable) A = gen_sq_matrix(size=degree, det=nr.choice(3), max_val=max_val, max_denom=max_denom) if lin_factors else gen_diagonal_matrix(size=degree, det=nr.choice(max_val), max_denom=max_denom) return (A.charpoly(x)).as_expr() def gen_poly(degree: int, lin_factors: bool =False, variable: str ='x', max_val: int =4, max_denom: int =1): return ((-1)**nr.choice(2))*(nr.choice(max_val)+1)*gen_monic_poly(degree=degree, lin_factors=lin_factors, variable=variable, max_val=max_val, max_denom=max_denom) def gen_lin_comb(vectors: Vectors, max_val: int =4, max_denom: int=1): vecs, scalars = Matrix([sympify(a) for a in vectors]), gen_row_vector(size=len(vectors), max_val=max_val, max_denom=max_denom) return (scalars*vecs)[0]
STACK_EDU
Since my first post about my PhD struggles I haven’t posted anything for five months. I got much more productive shortly afterwards. Back then I started being open with my struggles, which included writing about them, but also talking about them with colleagues. Being open about that is definetly important. It had helped and I subsequently did a lot of progress, which was good because conference deadlines were coming up. I was actually quite happy about that, because my supervisors started meeting with me very regularly and I enjoyed this, almost as if that were socializing. I’m definetly like feeling part of a team. In the end not everything turned out as I had hoped, part of which was not under my control, but nonetheless I’m happy with the experiences and progress that I did. However, in that time I didn’t have much time to update this blog. For me writing is like therapy, it relaxes me, provides reflexion and perspective and I feel like writing gives me some control back over my life. Thus, when things are going well, when my life has structure and a good routine, I don’t really need that additional control anymore (though I still think it helps). And as a result, almost forgot about this blog. Today I got motivation to take it up again, because I got an email comment 1 from someone telling me that they’d love to read more. This made my day 🤩. I can barely believe that someone found this and went through the trouble of messaging me, despite it not being much advertised anywhere. One of my greatest desires whenever I do something is create something of value 2. And even if only a single person sees value in something, that’s a lot of motivation to keep going. The timing is currently good because the post-conference summer is more relaxed now. At the same time, I sometimes have difficulties when there is little external pressure giving me structure, so I’ll try to work on my own systems and habits, on of which might be writing. So far I had been writing mostly when I was not feeling well, which might give this blog a selection bias and readers might think that I’m a pessimistic, fearful procrastinator, while most of the time I’m quite the contrary, only occasionally I have such moods. I still have to find a niche what to write here. Should I mostly write personal things? Tell more about my hobbies? Post pictures? Those things might also overlap heavily with my twitter. Or should I strictly write professional programming posts so that I can link this blog to my linkedin? I’m not sure, I guess I’ll just write and see what will become of this. : I found that the desire to create value quickly can also result in procrastination. For my research, as typical in academia, I often have to put in weeks of frustrating work until I can show a useful result and am lucky enough that the data makes sense. I found that I enjoy procrastinating by writing software tools to make an analysts life more comfortable, because often I can create something useful in a couple of days and don’t have to worry about issues with the data. It’s not just useless “immediate gratification” vs. something useful long-term, there are often useful projects with faster (but not immediate) gratification vs. long-term with weeks until you see progress. But sometimes the latter are those that really matter. This is why most anti-procrastination guides tell you to split your work into small managable chunks, do pomodoros intervalls etc. ↩︎ 2021-08-02 00:00 (Last updated: 2023-02-02 16:10)
OPCFW_CODE
In an effort to get caught-up with the Cloud Native space, I am embarking on an effort to build a completely dynamic Kubernetes environment entirely through code. To accomplish this, I am using (and learning) several technologies, including: - Container OS (CoreOS) for the Kubernetes nodes. - Ignition for configuring CoreOS. - Ansible for automation and orchestration. - VMware NSX for micro-segmention, load balancing and DHCP. There are a lot of great articles on the Internet around Kubernetes, CoreOS and other Cloud Native technologies. If you are unfamiliar with Kubernetes, I highly encourage you to read the articles written by Hany Michaels (Kubernetes Introduction for VMware Users and Kubernetes in the Enterprise – The Design Guide). These are especially useful if you already have a background in VMware technologies and are just getting started in the Cloud Native space. Mr. Michaels does an excellent job comparing concepts you are already familiar with and aligning them with Kubernetes components. Moving on, the vision I have for this Infrastructure-as-Code project is to build a Kubernetes cluster leveraging my vSphere lab with the SDDC stack (vSphere, vCenter, vSAN and NSX). I want to codify it in a way that an environment can be stood up or torn down in a matter of minutes without having to interact with any user-interface. I am also hopeful the lessons learned whilst working on this project will be applicable to other cloud native technologies, including Mesos and Cloud Foundry environments. Logically, the project will create the following within my vSphere lab environment: I will cover the NSX components in a future post, but essentially each Kubernetes environment will be attached to a HA pair of NSX Edges. The ECMP Edges and Distributed Logical Router are already in place, as they are providing upstream network connectivity for my vSphere lab. The project will focus on the internal network (VXLAN-backed), attached to the NSX HA Edge devices, which will provide the inter-node network connectivity. The NSX Edge is configured to provide firewall, routing and DHCP services to all components inside its network space. The plan for the project and the blog series is to document every facet of development and execution of the components, with the end goal being the ability of anyone reading the series to understand how all the pieces interrelate with one another. The series will kickoff with the following posts: - Bootstrapping CoreOS with Ignition - Understanding Ignition files - Using Ansible with Ignition - Building Kubernetes cluster with Ansible - Deploying NSX components using Ansible - Deploying full stack using Ansible If time allows, I may also embark on migrating from NSX-V to NSX-T for providing some of the tenant software-defined networking. I hope you enjoy the series!
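As a small teaser for the Ignition posts listed above, here is a minimal, purely illustrative Ignition v2.2 config; the SSH key and unit name are placeholders, and the real configs for this series will be generated and templated through Ansible rather than written by hand.

{
  "ignition": { "version": "2.2.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [ "ssh-rsa AAAA...replace-with-your-key" ]
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "etcd-member.service", "enabled": true }
    ]
  }
}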
OPCFW_CODE
RMA Admin Panel Screen Info

Here we can find the basic information about customer name, reference number, customer email, status, shipping address, messages and comments. In this screen you will be able to see the items for the RMA, like reasons, request and comments, with the quantity.

You can exchange messages with the client using the “Messages” tab: from here you can send a new message; a notification email is then sent to the customer and your message is added to the thread. The customer can reply from his/her customer account.

Note: if you enabled the option stores > configuration > boostmyshop > rma > customer notification > Automatic customer notification on RMA status change, the customer will receive an email each time the RMA status changes.

The “History” tab lists every event related to the RMA; an entry is added when:

- RMA status changes
- Customer or admin is notified
- A product is refunded OR returned in stock

Once you are in an RMA, you can manage the RMA using the statuses:

- Draft: you are creating the RMA, it is not visible for the customer
- Requested: customer sent a return request, it is pending admin approval
- Accepted: you accepted the return, customer must print the return form (from their customer account or using the link in the email sent)
- Processing: you received the products, you are going to process them
- Complete: you have processed the RMA (you processed refunds)

From the above screen of the RMA you will be able to print, send e-mail and PROCESS. This is the final step for an RMA. To process an RMA, go within the RMA and click on the “Process” button. Then a new screen is displayed where you can select the actions to be performed:

- For each product, you can decide the quantity to put back to stock, and the quantity to refund. Note: you can not refund a product if it has NOT been invoiced.
- You can decide to refund shipping using the “Yes / No” drop down menu
- Last, you can change the amount refunded using the adjustment textboxes: “Refund fee” and “Refund adjustment”

Every time you change a refund option, the Total refunded is updated at the bottom. Once everything is done, you can click on the button “Complete RMA” to perform the selected actions. Then the RMA status goes to “Complete” and the customer receives an email.

There are 2 process methods in RMA:

This screen has the basic information about the product name, reason for return, request, comments, quantity to return, and price paid. As an example in the below image, the reason for the return is a product with the wrong size, so it can be added to the stock again and a new product with the correct size can be sent to the customer. On the return dropdown choose the quantity and warehouse. Then fill the refund columns and click on Complete return. The status will then be changed to complete, the customer receives an email, and the returned product will be added to the stock. You can not refund a product if it has not been invoiced yet.

This screen is the same as the process-a-refund one, with an additional checkbox in the exchange column. If you tick this checkbox, a new pop up will be displayed allowing you to select which product to exchange the current one with. Then, select a product to exchange with and finally choose the shipping method. Click on complete to finalize the exchange process.
OPCFW_CODE
import json
import logging

from ava.common.exception import MissingComponentException

# configure logging
logger = logging.getLogger(__name__)


class JsonReporter:
    """
    Reports issues in JSON format to a given file. This is used to save results
    at the end of a scan.
    """

    def __init__(self, results, configs, auditors, checks, vectors):
        """Sets the reporter's collection of results, configs, vectors, checks, and auditors"""
        self._results = results
        self._configs = configs
        self._auditors = auditors
        self._checks = checks
        self._vectors = vectors

    def report(self, filename, start_time, end_time):
        """
        Saves results to a given file in JSON format.
        :param filename: file name
        :param start_time: start datetime
        :param end_time: end datetime
        """
        # calculate times
        times = {'start': str(start_time), 'end': str(end_time), 'duration': str(end_time - start_time)}

        # list auditors
        auditors = [{'key': a.key, 'name': a.name, 'description': a.description} for a in self._auditors]

        # list checks
        checks = [{'key': c.key, 'name': c.name, 'description': c.description} for c in self._checks]

        # generate output
        output = {
            'times': times,
            'configs': self._configs,
            'auditors': auditors,
            'checks': checks,
            'vectors': self._vectors,
            'results': self._results
        }

        try:
            # dump to file
            with open(filename, 'w') as f:
                json.dump({"report": output}, f, indent=1)
        except OSError as e:
            raise MissingComponentException("{} '{}'".format(e.strerror, e.filename))
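A hypothetical usage sketch follows; the shapes of results, configs and vectors, and the SimpleNamespace stand-ins for auditor/check objects, are illustrative assumptions only (the real objects come from the scanner), and the import path for JsonReporter depends on the project layout.

from datetime import datetime
from types import SimpleNamespace
# from <your package> import JsonReporter   # import path depends on the project layout

auditors = [SimpleNamespace(key="xss", name="XSS Auditor", description="Audits reflected XSS")]
checks = [SimpleNamespace(key="xss.basic", name="Basic payload", description="Injects a basic payload")]

start = datetime.now()
# ... run the scan here, collecting results/configs/vectors ...
end = datetime.now()

reporter = JsonReporter(results=[], configs={"agent": "ava"}, auditors=auditors,
                        checks=checks, vectors=[])
reporter.report("report.json", start, end)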
STACK_EDU
After some head banging, I finally managed to send SCPI commands from my Mac to my Agilent (Keysight) DSOX2002A. I've worked on this since December last year. Almost a year, but I did not spend a sustained amount of effort. I did it just during my free time. I still have a day job that requires most of my time and focus, and cannot afford too much time for my hobbies (unfortunately). Yes, programming is a hobby. I am not making a living out of it (despite some opinions). But I did it! It was mostly an ambition I had. I was so pissed off when I realized that there are absolutely no OS X drivers for Agilent's tools that I decided to make my own drivers and applications for Mac.

This is how my console looks when the scope is issued a :POD1:DISPlay 1 SCPI command:

Agilent Technologies
Pipe ref 1: Bulk OUT
Pipe ref 2: Bulk IN
Pipe ref 3: Interrupt IN
Enter a SCPI command…
USBTMC: usbtmc_write called
USBTMC: Can send remaining bytes in a single transaction…
USBTMC: setup I/O buffer for DEV_DEP_MSG_OUT message…
USBTMC: Instrument command: :POD1:DISPlay 1
USBTMC: Append write buffer (instrument command) to USBTMC message…
USBTMC: Check if this is the last transfer…
USBTMC: n_bytes: 29
USBTMC: this_part: 17
USBTMC: Add zero bytes to achieve 4-byte alignment… n_bytes: 32
Added 0x00 for: usbtmc_buffer
Added 0x00 for: usbtmc_buffer
Added 0x00 for: usbtmc_buffer
USBTMC: Buffer content is:
usbtmc_buffer = 01
usbtmc_buffer = 01
usbtmc_buffer = FE
usbtmc_buffer = 00
usbtmc_buffer = 11
usbtmc_buffer = 00
usbtmc_buffer = 00
usbtmc_buffer = 00
usbtmc_buffer = 01
usbtmc_buffer = 00
usbtmc_buffer = 00
usbtmc_buffer = 00
usbtmc_buffer = 3A
usbtmc_buffer = 50
usbtmc_buffer = 4F
usbtmc_buffer = 44
usbtmc_buffer = 31
usbtmc_buffer = 3A
usbtmc_buffer = 44
usbtmc_buffer = 49
usbtmc_buffer = 53
usbtmc_buffer = 50
usbtmc_buffer = 6C
usbtmc_buffer = 61
usbtmc_buffer = 79
usbtmc_buffer = 20
usbtmc_buffer = 31
usbtmc_buffer = 0A
usbtmc_buffer = 00
usbtmc_buffer = 00
usbtmc_buffer = 00
usbtmc_buffer = 00
USBTMC: End buffer content.
USBTMC: store bTag (in case we need to abort)…
USBTMC: increment bTag – and increment again if zero…
USBTMC: Incremented bTag = 2
Program ended with exit code: 0

When SCPI instructions are sent, these must be wrapped by the transmission routine in a REQUEST_DEV_DEP_MSG_OUT wrapper. This is a requirement of the USBTMC USB488 subclass specification. I had many issues with this until I managed to get it right. Again, the documentation is unbelievably difficult to find. You might think that usb.org should have it? Well, good luck finding it on their site. When googling for „USBTMC USB488 Subclass Specification” or, at least, „USB488”, one of the first results is a link to a repo of the Physics Department at the University of California, San Diego. You have to be kidding me! Universities are still the best at this. (You might also want to check this link.)

The user instruction (SCPI command) always starts at byte 13 ( usbtmc_buffer) and always ends with a 0x0A, a line feed (newline) character, at byte 28 ( usbtmc_buffer in the above example). It is important to have a 0x0A that terminates the user instruction, otherwise the instrument will return a Query Interrupted error.
Don’t forget the +1 shift in numbering due to the fact that the buffer starts at index :

…
usbtmc_buffer = 3A
usbtmc_buffer = 50
usbtmc_buffer = 4F
usbtmc_buffer = 44
usbtmc_buffer = 31
usbtmc_buffer = 3A
usbtmc_buffer = 44
usbtmc_buffer = 49
usbtmc_buffer = 53
usbtmc_buffer = 50
usbtmc_buffer = 6C
usbtmc_buffer = 61
usbtmc_buffer = 79
usbtmc_buffer = 20
usbtmc_buffer = 31
usbtmc_buffer = 0A
…

The first 12 bytes contain the REQUEST_DEV_DEP_MSG_OUT header. The total number of bytes must be divisible by 4. When commands do not achieve this, there should be a rounding procedure to the closest upper multiple of 4 that sets nulls for the additional bytes (in order to preserve the multiple-of-4-byte boundary alignment):

…
usbtmc_buffer = 0A –> Line feed = \n
usbtmc_buffer = 00 –> null byte for boundary padding
usbtmc_buffer = 00 –> null byte for boundary padding
usbtmc_buffer = 00 –> null byte for boundary padding
usbtmc_buffer = 00 –> null byte for boundary padding; total: 32 bytes -> divisible by 4

I dug into the only available decent piece of USBTMC open source, the Linux driver made by Stefan Kopp. The usbtmc_read and write routines were adapted for OS X and included in my client–space application. For now, the application's main entry point uses a static char array to pass all commands for tmc488 wrap–up and further on the bulk–out USB pipe towards my scope. See below:

char text[] = ":POD1:DISPlay 1\n";
//char text[] = ":SAVE:IMAGe:FORMat PNG\n";
//char text[] = ":SAVE:IMAGe:STARt somefile.png\n";
//char text[] = ":DISPlay:ANNotation:BACKground OPAQue\n";
//char text[] = ":DISPlay:ANNotation:TEXT 'This is an Agilent DSOX2002A... and I have managed to control it from my Mac... with a custom-developed USBTMC driver...'\n";
//char text[] = "DISPlay:ANNotation:COLor RED\n";
//char text[] = "DISPlay:VECTors 0\n";

uWrite(text, sizeof(text));

And this is the result of the :DISPlay:ANNotation:TEXT command plus several others (see above). This is how it looks on Agilent's screen:

All this work was a bit of a nightmare. Luckily, I was inspired enough not to quit when I went through the most difficult moments, like when nothing seemed to be right and working. I just left the project to rest for a while and went back to it when I had enough sleep or energy. However, despite the scarcity of the prototype's functionalities, this is a major success for me. The simple fact that I was able to implement from scratch a USB communication protocol on a different platform (OS X) than the mainstream (Windoze), without having any examples or previously–released web–discoverable projects: this is big for me. My satisfaction is huge. This project opens up some new opportunities for porting SCPI to the Mac and having various libraries, code snippets and many other Open–Source projects for the entire community of enthusiasts.

That's all for now. Next step is to refine the client–side application and make it work with commands sent from the terminal (parsing scanf probably). After I cross–check that all's OK with the read and write routines from client–space, I will port these in a pure serial driver that will create entry points in /dev. I believe it will be much easier to work with a POSIX file because it can also be accessed from applications like screen or CoolTerm etc. I will share the drivers when ready but, beware, use it at your own risk.
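To make the wrapping rules above concrete, here is a small, self-contained sketch of building the 12-byte Bulk-OUT header plus padded payload. It is illustrative only: the function and constant names are mine, not the author's driver, and unlike the 17-byte transfer in the log above it does not include the string's trailing NUL in the transfer size.

#include <stdint.h>
#include <string.h>

#define DEV_DEP_MSG_OUT 0x01   /* USBTMC MsgID for a device-dependent command message */

/* Wrap one SCPI command (n bytes, already ending in 0x0A) in a 12-byte
   USBTMC Bulk-OUT header and pad the result to a 4-byte boundary.
   Returns the total number of bytes to send, or 0 on bad arguments. */
static size_t usbtmc_wrap_scpi(uint8_t *buf, size_t buf_len,
                               uint8_t btag, const char *scpi, size_t n)
{
    size_t total = 12 + ((n + 3) & ~(size_t)3);
    if (total > buf_len || btag == 0)
        return 0;

    memset(buf, 0, total);                  /* reserved bytes and padding stay zero */
    buf[0] = DEV_DEP_MSG_OUT;               /* MsgID */
    buf[1] = btag;                          /* bTag, 1..255 */
    buf[2] = (uint8_t)~btag;                /* bTagInverse */
    buf[4] = (uint8_t)(n & 0xff);           /* TransferSize, little endian */
    buf[5] = (uint8_t)((n >> 8) & 0xff);
    buf[6] = (uint8_t)((n >> 16) & 0xff);
    buf[7] = (uint8_t)((n >> 24) & 0xff);
    buf[8] = 0x01;                          /* bmTransferAttributes: EOM set */
    memcpy(buf + 12, scpi, n);              /* the command itself, e.g. ":POD1:DISPlay 1\n" */
    return total;
}

/* Example:
   uint8_t out[64];
   const char *cmd = ":POD1:DISPlay 1\n";
   size_t len = usbtmc_wrap_scpi(out, sizeof out, 1, cmd, strlen(cmd));
   ... send out[0..len-1] on the Bulk OUT pipe ... */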
OPCFW_CODE
Novel–Let Me Game in Peace–Let Me Game in Peace Chapter 1063 – Lady Supreme Yin tremendous volleyball w.a.n.g Qiuyuan checked out Shen Yuchi and found him nod a little. Only then did he bow to the Moon G.o.ddess statue. “Thank you, Your Excellency for your reward. I want that pearl.” After he bowed, there had been no result in the temple. The Moon G.o.ddess didn’t communicate yet again. The pearl became even brighter, nevertheless it didn’t fly out. Having said that, Zhou Wen wasn’t sure if what Shen Yuchi had stated was accurate. Equally as he was approximately to inquire about something different, he suddenly noticed a creak because the home to the Moon G.o.ddess Temple established. Following he bowed, there were no result in the temple. The Moon G.o.ddess didn’t converse again. The pearl grew to become even happier, but it surely didn’t fly out. “Did that individual lie in my experience?” Shen Yuchi’s concept altered. As Zhou Wen believed to him or her self, he noticed a pearl as well as a jade fall take flight right out of the Moon G.o.ddess sculpture and terrain for the wood made kitchen table while watching statue. “Director-Typical, I think we must go in and retrieve them our selves. Why never I go in and take a look first?” w.a.n.g Qiuyuan stated because he looked at the luminous pearl and jade slip. the young explorers penshurst Shen Yuchi and w.a.n.g Qiuyuan had been overjoyed. That they had been awaiting this instant. Right after he bowed, there was clearly no result out of the temple. The Moon G.o.ddess didn’t communicate yet again. The pearl grew to become even nicer, but it surely didn’t travel out. Shen Yuchi also wore a confused seem. This became completely different through the info he obtained gathered, but his data couldn’t be completely wrong. Thus, he didn’t understand what obtained gone drastically wrong. Zhou Wen was alarmed. Ice Maiden was an ice-elemental Terror creature. The creature from the temple needed to be unimaginably strong to hold her. aaron’s montana brides During the past, most of the dimensional critters got put noticeable traps. They clearly well informed individuals on the risk and dared them to are available above. Previously, the majority of the dimensional pets experienced set obvious traps. They clearly up to date folks of the real danger and dared them to are available through. Zhou Wen endured there motionless. Even an existence like Ice-cubes Maiden were iced. If he transferred, he might freeze out even faster than her. An ice pack Maiden was right. It’s very best never to enter in such a put Zhou Wen located the fishing line comfortable. However delivery was unique, he appeared to have experienced a comparable scenario. Inside of the Moon G.o.ddess Temple, there seemed to be a timber sculpture. It turned out a dignified and beautiful lady. what are the types of one act play Even so, with each step she took, frost footprints showed up on the ground. Just after taking a number of techniques back again, she was freezing as if she got turned into a jade sculpture. However, there wasn’t any frost in her body system, she provided off a sensation that she was freezing. It turned out extremely strange. Shen Yuchi also wore a puzzled start looking. This became very different through the information and facts he possessed received, but his data couldn’t be wrong. Therefore, he didn’t figure out what acquired removed improper. Furthermore, Zhou Wen acquired many Mythical Partner Beasts now. There was no need to take the danger. 
The Mysterious Heiress: Researcher In Disguise Even so, with each step she took, frost footprints appeared on the ground. Following choosing a very few steps back, she was iced as though she acquired changed into a jade statue. Although there wasn’t any frost on her system, she gifted off a emotion she was iced. It absolutely was extremely bizarre. Zhou Wen found it weird. He experienced previously selected Associate Chicken eggs. After a selection was made, the Companion Beasts would typically fly through by themselves. Shen Yuchi and w.a.n.g Qiuyuan clearly believed the exact same. The 2 main of these were actually overjoyed. Zhou Wen’s heart and soul stirred as he immediately looked at some thing. Heartwarming Aristocratic Marriage: Influential Masterâs Wife-Chasing Strategy What is happening? Is not she giving a Friend Monster? Why doesn’t she allow them to take it? Could this be the Moon G.o.ddess’s rip-off? Now, this has been definitely the opportunity for him to get success within a single move. In case the other inspectors who obtained incorporate him hadn’t died, this chance may well not have landed on him. Ice cubes Maiden was correct. It is most effective to not ever get into such a position the optimist’s daughter sparknotes Zhou Wen’s head raced, but he couldn’t imagine a good answer. That is appropriate. She’s Girl Supreme Yin to commence with… There’s a 2nd explanation on the name… Now that the Moon G.o.ddess got appeared, the disrespectful Zhou Wen and An ice pack Maiden would naturally be punished. Having said that, with every step she needed, frost footprints came out on the floor. Immediately after going for a few actions backside, she was freezing almost like she experienced become a jade statue. However, there wasn’t any frost in her system, she provided off a emotion she was freezing. It absolutely was extremely odd. w.a.n.g Qiuyuan didn’t know what you can do. He didn’t dare enter the Moon G.o.ddess Temple, so he could only have a look at Shen Yuchi. Zhou Wen withstood there motionless. Even an presence like An ice pack Maiden were freezing. If he shifted, he might hold even faster than her. Nonetheless, Moon G.o.ddess didn’t do this. She initially instructed them in the rewards, only to remove them after they moved through. Zhou Wen sensed that she shouldn’t be referred to as Moon G.o.ddess, but a Scamming G.o.ddess. Novel–Let Me Game in Peace–Let Me Game in Peace
OPCFW_CODE
/* Operations on 36-bit pseudo words

   Multics 36bit words are simulated with 64bit integers. Multics uses big
   endian representation. Bit numbering is left to right; bit number zero is
   the leftmost or the most significant bit (MSB). The documentation refers to
   the right-most or least significant bit as position 35.

   Note that with the LSB zero convention, the value of a bit position matches
   up with its twos-complement value, e.g. turning on only bit #35 results in
   a value of 2 raised to the 35th power. With the MSB zero convention used in
   Multics, turning on only bit 35 results in the twos-complement value of one.

   The following macros support operating on 64bit words as though the
   right-most bit were bit 35.
*/

/*
   Copyright (c) 2007-2013 Michael Mondy

   This software is made available under the terms of the ICU License -- ICU
   1.8.1 and later. See the LICENSE file at the top-level directory of this
   distribution and at http://example.org/project/LICENSE.
*/

// ============================================================================

/*
 * Extract, set, or clear the (i)th bit of a 36-bit word (held in a uint64).
 */

#define bitval36(word,i)   ( ((word)>>(35-i)) & (uint64_t) 1 )
#define bitset36(word,i)   ( (word) | ( (uint64_t) 1 << (35 - i)) )
#define bitclear36(word,i) ( (word) & ~ ( (uint64_t) 1 << (35 - i)) )

// ============================================================================

/*
 * getbits36()
 *
 * Extract a range of bits from a 36-bit word.
 */

static inline t_uint64 getbits36(t_uint64 x, int i, unsigned n)
{
    // bit 35 is right end, bit zero is 36th from the right
    int shift = 35 - i - n + 1;
    if (shift < 0 || shift > 35) {
        log_msg(ERR_MSG, "getbits36", "bad args (%012llo,i=%d,n=%d)\n", x, i, n);
        cancel_run(STOP_BUG);
        return 0;
    } else
        return (x >> (unsigned) shift) & ~ (~0 << n);
}

// ============================================================================

/*
 * setbits36()
 *
 * Set a range of bits in a 36-bit word -- Returned value is x with n bits
 * starting at p set to the n lowest bits of val
 */

static inline t_uint64 setbits36(t_uint64 x, int p, unsigned n, t_uint64 val)
{
    int shift = 36 - p - n;
    if (shift < 0 || shift > 35) {
        log_msg(ERR_MSG, "setbits36", "bad args (%012llo,pos=%d,n=%d)\n", x, p, n);
        cancel_run(STOP_BUG);
        return 0;
    }
    t_uint64 mask = ~ (~0<<n);  // n low bits on
    mask <<= (unsigned) shift;  // shift 1s to proper position; result 0*1{n}0*
    // caller may provide val that is too big, e.g., a word with all bits
    // set to one, so we mask val
    t_uint64 result = (x & ~ mask) | ((val&MASKBITS(n)) << (36 - p - n));
    return result;
}

// ============================================================================

/*
 * bit#_is_neg()
 *
 * Functions to determine if a bit-36, bit-18, or bit-n word's MSB is on.
 */

#define bit36_is_neg(x) (((x) & (((t_uint64)1)<<35)) != 0)
#define bit18_is_neg(x) (((x) & (((t_uint64)1)<<17)) != 0)
#define bit_is_neg(x,n) (((x) & (((t_uint64)1)<<((n)-1))) != 0)

//=============================================================================

/*
 * sign36()
 *
 * Extract a 36-bit signed value from a 36-bit word.
 */

static inline t_int64 sign36(t_uint64 x)
{
    if (bit36_is_neg(x)) {
        t_int64 r = - (((t_int64)1<<36) - (x&MASK36));
        return r;
    } else
        return x;
}

/*
 * sign18()
 *
 * Extract an 18bit signed value from a 36-bit word.
 */

static inline int32 sign18(t_uint64 x)
{
    if (bit18_is_neg(x)) {
        int32 r = - ((1<<18) - (x&MASK18));
        return r;
    } else
        return x;
}

/*
 * sign15()
 *
 * Extract a 15bit signed value from a 36-bit word.
 */

static inline int32 sign15(uint x)
{
    if (bit_is_neg(x,15)) {
        int32 r = - ((1<<15) - (x&MASKBITS(15)));
        return r;
    } else
        return x;
}

/*
 * bits2num()
 *
 * Extract an (nbits-1)bit signed value from a bit string of length nbits.
 */

static inline int bits2num(unsigned nbits, unsigned x)
{
#if 1
    // make compiler happier about nbits - 1
    if (nbits < 2 || nbits >= (8*sizeof(int)))
        return ~0;
#endif
    unsigned nb = nbits - 1;
    if (x > MASKBITS((nb))) {
        int r = - ((1<<nb) - (x&MASKBITS(nb)));
        return r;
    } else
        return x;
}

//=============================================================================

/*
 * negate36()
 *
 * Negate a 36-bit signed value. Result is in Multics representation.
 */

static inline t_int64 negate36(t_uint64 x)
{
    // overflow not detected
    if (bit36_is_neg(x))
        return ((~x & MASK36) + 1) & MASK36;
    else
        return (- x) & MASK36;
}

/*
 * negate18()
 *
 * Negate an 18-bit signed value within the lower part of a 36bit word.
 * Result is in Multics representation; use sign18() to extract values
 * for computation.
 */

static inline int32 negate18(t_uint64 x)
{
    // overflow not detected
    if (bit18_is_neg(x))
        return ((~x & MASK18) + 1) & MASK18;
    else
        return (- x) & MASK18;
}

/*
 * negate72()
 *
 * Arguments are pointers to two 36-bit words, one holding the high bits and
 * the other the low bits. Result is in Multics representation.
 */

static inline void negate72(t_uint64* hip, t_uint64* lop)
{
    // FIXME? -- overflow not detected/reported.
    *hip = (~ *hip) & MASK36;
    *lop = (~ *lop) & MASK36;
    ++ *lop;
    if ((*lop >> 36) != 0) {
        *lop &= MASK36;
        ++ *hip;
        *hip = *hip & MASK36;
    }
}

//=============================================================================
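As a standalone illustration of the MSB-zero numbering described above, the snippet below reimplements the extraction logic with plain uint64_t so that it compiles without the simulator's t_uint64, logging calls, or MASK* macros; the names here are mine and hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for getbits36(), with the same left-to-right numbering. */
static uint64_t demo_getbits36(uint64_t x, int i, unsigned n)
{
    int shift = 35 - i - (int)n + 1;
    return (x >> (unsigned)shift) & ~(~(uint64_t)0 << n);
}

int main(void)
{
    uint64_t w = 0765432101234ULL;  /* a 36-bit word, written in octal */

    /* bit 0 is the MSB, so (i=0, n=3) is the leftmost octal digit ... */
    printf("%llo\n", (unsigned long long)demo_getbits36(w, 0, 3));   /* prints 7 */
    /* ... and (i=33, n=3) is the rightmost octal digit */
    printf("%llo\n", (unsigned long long)demo_getbits36(w, 33, 3));  /* prints 4 */
    return 0;
}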
STACK_EDU
Jared 06/01/2020, 6:52 PM
tasks to run 10 times (this is using Core/not on cloud). Am I missing something obvious? When I initially register the flow, 10 runs get queued and executed, but no more. In the terminal running the server, the scheduler wakes, schedules 0 runs, and sleeps even if all 10 original runs have passed. On-demand runs in the UI still work at this point. I'll comment with what I'm doing to reproduce.

prefect server start and prefect agent start both run fine. Then from the interpreter in the same env:

from datetime import timedelta, datetime

from prefect import task, Flow
from prefect.schedules import IntervalSchedule


@task
def getone():
    return 1


schedule = IntervalSchedule(
    start_date=datetime.utcnow() + timedelta(seconds=1),
    interval=timedelta(minutes=1),
)

with Flow("testflow", schedule=schedule) as flow:
    getone()

flow.register()

Kyle Moon-Wright 06/01/2020, 7:15 PM

Jared 06/01/2020, 8:00 PM
scheduler_1 | [2020-05-31 12:10:53,195] INFO - prefect-server.Scheduler | Scheduled 0 flow runs.
graphql_1   | INFO: 192.168.0.6:55604 - "POST /graphql/ HTTP/1.1" 200 OK
scheduler_1 | [2020-05-31 12:10:53,296] DEBUG - prefect-server.Scheduler | Sleeping for 300.0 seconds...

Kyle Moon-Wright 06/01/2020, 8:37 PM
on my own Prefect Server (using 0.11.4) with a Local Agent and after 5 flow runs, the UI repopulates the queue to 10 Upcoming Runs on my Dashboard. Would you mind opening an issue on GitHub for greater visibility for the team?

scheduler_1 | [2020-06-01 20:43:28,833] DEBUG - prefect-server.api.schedules | Flow run <flow_run_id> of flow <flow_id> scheduled for 2020-06-01T20:53:20.963042+00:00   # this occurs x5, x1 for each run
scheduler_1 | [2020-06-01 20:43:28,856] INFO - prefect-server.Scheduler | Scheduled 5 flow runs.
scheduler_1 | [2020-06-01 20:43:28,972] DEBUG - prefect-server.api.schedules | Schedule <schedule_id> was not ready for new scheduling.
scheduler_1 | [2020-06-01 20:43:28,972] INFO - prefect-server.Scheduler | Scheduled 0 flow runs.
scheduler_1 | [2020-06-01 20:43:29,073] DEBUG - prefect-server.Scheduler | Sleeping for 300.0 seconds...
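One way to see what the scheduler has actually queued, beyond the logs quoted above, is to query the server's GraphQL API directly. The sketch below is illustrative only (it is not from the thread): it assumes a Prefect 0.11.x-era Client pointed at a local Prefect Server, and the Hasura-style field names may differ between versions.

from prefect import Client

client = Client()  # assumes the backend is configured for the local server
result = client.graphql(
    """
    query {
      flow_run(where: {state: {_eq: "Scheduled"}},
               order_by: {scheduled_start_time: asc}) {
        name
        scheduled_start_time
      }
    }
    """
)
print(result)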
OPCFW_CODE
import numpy as np
from scipy import signal  # savgol_filter

from CompSlowDecompTest import CompSlowDecompTest
import MathUtils


class Plateau:
    def __init__(self, test: CompSlowDecompTest, cutoff_modulus: float = 0.4):
        strain_comp = test.strain[test.compression_range]
        stress_comp = test.stress[test.compression_range]
        (strain, stress) = signal.savgol_filter((strain_comp, stress_comp),
                                                window_length=51, polyorder=3)
        ders = MathUtils.derivative(strain, stress)
        plateaud_indices = np.where(np.logical_and(ders < cutoff_modulus,
                                                   strain_comp[:-1] > 0.05))[0]
        if plateaud_indices.size > 0:
            self.start_idx = plateaud_indices[0]
            self.end_idx = plateaud_indices[-1]
            self.stress = np.mean(stress[self.start_idx:self.end_idx])
        else:
            # there is no plateau, so we return the point with lowest derivative
            strain_start_idx = np.where(strain > 0.05)[0][0]  # assume that there are locations beyond that strain
            min_derivative = np.min(ders[strain_start_idx:-1])
            min_der_idx = np.where(ders == min_derivative)[0][0]  # assume we can find the minimum
            self.start_idx = min_der_idx
            self.end_idx = min_der_idx
            self.stress = stress_comp[min_der_idx]
        self.strain_start = strain_comp[self.start_idx]
        self.strain_end = strain_comp[self.end_idx]
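A hypothetical usage sketch follows. Plateau only needs an object exposing strain, stress and compression_range, so a simple stand-in with an invented stress curve is enough to illustrate the API; the real CompSlowDecompTest presumably supplies measured data.

import numpy as np
# from <this module> import Plateau  # import path depends on the project layout

class FakeTest:
    def __init__(self):
        strain = np.linspace(0.0, 0.6, 600)
        # toy curve: steep elastic rise, then a nearly flat plateau
        stress = np.where(strain < 0.1, 50.0 * strain, 5.0 + 0.1 * (strain - 0.1))
        self.strain = strain
        self.stress = stress
        self.compression_range = slice(0, 600)

plateau = Plateau(FakeTest(), cutoff_modulus=0.4)
print(plateau.strain_start, plateau.strain_end, plateau.stress)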
STACK_EDU
Critical Microsoft Patches Cause Havoc

Wednesday, May 24, 2006; 12:10 AM

Healer, heal thyself: This month three Microsoft security fixes ended up causing a lot of folks serious problems with Internet Explorer, Office, and Outlook Express. A recent patch for Windows Explorer, distributed via Windows Update, essentially rendered Office unusable for many people, preventing them from opening or saving files. For others, IE's address bar refused to accept manually entered URLs. The trouble mainly affected users who have the HP Share-to-Web program, which is no longer distributed. It came with HP PhotoSmart software, any HP DeskJet printer with a card reader, and HP scanners; some HP cameras and optical drives also bundled the software. And certain PCs running older nVidia graphics cards had problems as well. Microsoft has issued a new patch via Windows Update. If you have Automatic Updates enabled, it will automatically determine if you need the "patched" patch. (See the tips below for more on configuring Automatic Updates.)

Meanwhile, a patch for IE plugged eight critical holes in the browser but also altered IE's behavior in response to an ongoing patent lawsuit brought by a California university. The update adjusts the way IE handles commonly used ActiveX controls, particularly for plug-ins such as the Macromedia Flash Player. For the browser change to work correctly, Web sites have to make corresponding changes. Otherwise, every ActiveX control on a site requires an extra click to activate. Microsoft has been trying to get the word out to Web site managers, but of course many of the millions of sites out there didn't get updated. And users, who received little notice about the change, were caught off guard when many sites suddenly seemed broken. Microsoft released a temporary workaround that undoes the ActiveX control patch while leaving the security update intact. The fix is due to be phased out soon because of the continuing patent battle, but for now you can get it here.

As if those bugs weren't enough, users have reported that their Outlook Express 6 address book vanished after they installed a Microsoft security patch for that program. Microsoft has said only that it is looking into the problem. Users on the company's forum say they were able to retrieve their address books by uninstalling the patch. Luckily, the bug it fixes isn't critical, so removing the patch seems to be an acceptable way to get OE's address book working again.

Avoid Patch Crash: Tips for Staying Safe

Don't let buggy patches goad you into disabling Automatic Updates. Instead, take charge with these steps.

1. Install at your command. Set Windows Update to automatically download patches, but to install them only when you say so. Open the Control Panel, choose System, and then click the Automatic Updates tab. Select Download updates for me, but let me choose when to install them, and click OK.

2. Check for problems. When you're prompted to install patches, select Custom Install (Advanced) to see a short description of each patch, as well as its Microsoft Knowledge Base (KB) number. Use that KB number to search for any reported problems at Microsoft's Security Response Center Blog or the company's Windows Update security newsgroup.

3. Prep a rollback. Set a restore point before installing patches so you can always revert to a working configuration.
You can also remove most patches via the Windows Add/Remove Programs utility, which lists the date and KB number for each installed patch as long as you check the Show Updates box up top. Remove critical patches only as a last resort.

Mozilla.org has patched a half dozen critical security flaws in its Firefox browser. Versions 184.108.40.206 and newer or 1.0.8 and newer will protect you. You can download the latest version of Firefox at www.getfirefox.com. For more info on the bugs, click here.

Found a hardware or software bug? Send us an e-mail on it to email@example.com.
OPCFW_CODE
Akamai's Adaptive media delivery product has been validated for use with Wasabi. Follow the steps outlined below to activate Wasabi cloud storage with Akamai's CDN network. - Active Akamai account - Active Wasabi account - Active public domain - Administrator access to domain provider - All information provided below is using my test domain "www.wasabi-support.com", this will be associated with Akamai's CDN network - The test domain is owned and managed by godaddy.com & i have admin rights to edit domain information such as DNS values to associate my domain with Akamai. Table of Contents: - Upload data to Wasabi storage - Configure Akamai - Staging & Activation of Akamai property - Configuration of hosting provider - Enable HTTPS - Access Hosted Content via Akamai CDN - Adding Different Wasabi regions to the same property Wasabi has verified many S3, FTP, FTPS clients for uploading data. Refer to our KBs for specific information on products/vendors Wasabi has been verified for use with. In order for the uploaded data to be delivered to a CDN vendor as Akamai, data stored on wasabi needs to be enabled for public access. Here is how. - Enabling public access to specific objects - Refer to information here - Enabling public access to all objects in a bucket - Refer to information here - Enabling public access to all objects in a folder - Refer to information here Login to your Akamai Control Center portal: 3) Click on Properties and click "Create property" 4) Provide a "Property Name" and click "Create Property" Note: Property name is only for internal Akamai use, Please contact Akamai for recommendations 5) Under "Property Hostnames" click "Add" 6) Once you click "Add" a pop-up window will appear and request you to provide the following: - Hostname - in the test outline here, we are using the hostname - akamaitest.wasabi-support.com where wasabi-support.com is the top-level domain and akamaitest is a sub-domain and click "Next" - Choose "IPv4 only" and click "Next" - Choose mapping solution, in this integration we are choosing VOD content and click "Next" - Confirm "Edge Hostnames" and click "Submit" - Confirmation on providing a property hostname: 7) In Property Configuration Settings section -> Behaviors -> Orgin Server click "Origin Type" and choose "Your Origin" 8) Provide the following info: - Origin Type - Your Origin - Origin Server Hostname - s3.us-west-1.wasabisys.com - Forward host header - Origin Hostname - Cache key Hostname - Origin Hostname Note that this example discusses the use of Wasabi's us-west-1 storage region. To use other Wasabi storage regions, please use the appropriate Wasabi service URL as described in this article. Note: Additional info provided below on to setup Akamai for using different Wasabi regions as part of the same property. 9) Leave default values for SSL & ports as shown below. Under the "Content Provider Code" click "Create New" Note: Content provider code is used for Akamai's billing & reporting purposes. Please contact Akamai for additional details 10) A pop-up window will default to a "Content Provider Code name" Click "Create" 11) Leave defaults for rest of the configuration elements. 
12) Scroll all the way down to the end of the page and click "Add Behavior" 13) In "Add a Behavior for this rule" pop-up search for "Origin Base Path" 14) Click "Insert Behavior" 15) Provide the base Path value to match your cloud storage account, for example, my Wasabi account has a bucket named "akamawasabitest", i created a folder called "Videos" and inside this folder i have a video asset named "Why Wasabi is Different_Wasabi.mp4" as shown below. In this case the base path provided in Akamai control center would be "/akamaiwasabitest/Videos/" Akamai Origin Base Path and click "Save" 15) Once save completes, Navigate to "Activate" tab 16) Click "Activate v1 on Staging" - the configuration created above will be verified 17) Click "Activating v1 on Staging" 18) Activation process takes several mins to complete: 19) Once staging activation completes successfully, Click "Activate on Production" 20) Provide email address to be notified and Confirm activation 21) Activation on production takes about an hour wasabi-support.com domain is held by GoDaddy hosting provider. Login into your hosting provider's portal and add a CNAME entry as shown below: - host - akamaitest - Points to - akamaitest.wasabi-support.com - TTL - 60 mins (can be different) 22) Click "Create" and choose "Certificate" 23) Click "Create New Certificate" 24) Choose the best option to validate website's identity. As an example, i will be choosing Domain Validation (DV) and click "Next" 25) Choose "SAN" and click "Next" 26) Provide following and click "Next" - common-name for the certificate - company info such as address, contact info etc 27) Provide contact details: 28) Choose "Standard TLS" and click "Review" 29) Click "Submit" 30) A pop-up will appear requesting to validate the domain 31) Validate the domain, in the following example, validating the domain via DNS Token, a TXT value, Take the appropriate steps with the website hosting vendor to validate the domain. 32) Using a browser navigate to "https://akamaitest.wasabi-support.com/Why%20Wasabi%20is%20Different%20_%20Wasabi.mp4" - URL is HTTPS based - Top level URL contains akamaitest.wasabi-support.com - Since we offered base path to be "akamaiwasabitest/Videos" - the asset needs to be called out after the URL 33) Under "Property Configuration Settings" click "Add Rule" 34) Choose "Blank Rule Template" and click "Insert Rule" 35) Hover over to right of the newly created rule "New Rule" and click the gear icon, choose "Edit Name" 36) Provide a name "East2 Rule", then click "Add Match", opt for "Path" in the first drop down option and set "matches one of" in the second drop down option and provide bucket/folder path. 37) Click "Add Behavior", in the pop-up search for "Origin Server" and click "Insert Behavior" 38) Provide the following info and click "Save" - Origin Type - Your Origin - Origin Server Hostname - s3.us-east-2.wasabisys.com - Forward Host Header - Origin Hostname - Cache Key Hostname - Origin Hostname 39) Following steps outlined from 33 thru 38 - additional Wasabi regions can be added to your Akamai property.
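Relatedly, for the "Upload data to Wasabi storage" prerequisite at the top of this guide, a minimal illustrative sketch using boto3 against Wasabi's S3-compatible endpoint is shown below; the bucket, folder and file names mirror the example configuration above, credentials are assumed to be configured separately, and a public-read ACL is just one of the access options covered in the Wasabi KBs referenced earlier.

import boto3

# Wasabi us-west-1 endpoint, matching the origin server hostname used above
s3 = boto3.client("s3", endpoint_url="https://s3.us-west-1.wasabisys.com")

s3.upload_file(
    "Why Wasabi is Different_Wasabi.mp4",          # local file
    "akamaiwasabitest",                            # bucket used in the origin base path
    "Videos/Why Wasabi is Different_Wasabi.mp4",   # key under the Videos folder
    ExtraArgs={"ACL": "public-read"},              # object must be publicly readable for the CDN
)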
While it's definitely not perfect, I have used the Feedback option in Team Development on a number of projects. It's been a pretty handy way of quickly capturing basic feedback from users, all the while logging information about users and session state. But the workflow and follow-up of issues has always been a challenge, and in fact, the APEX team has announced that it is now deprecated and will be removed in future releases.

If you know anything about me by now, I just love a drool-worthy card-based UI. Add in some powerful drag-and-drop features, and I am all over it! So it should come as no surprise that my team uses Trello to track tasks and work. I therefore thought I would try to replace some of the features I loved about Feedback with a quick and easy 'Create Trello Card' button.

Our Trello Setup

We use different Trello boards to segregate the work between our teams. We also label our cards within boards to make them easy to filter by module or application. Finally, we tag specific members on cards when we know who needs to work on them.

Lists of Values

We created a few LOVs to track our different boards, labels, and team members. Trello provides some good documentation about how to automatically tag members and assign labels to cards via email, so I needed to have these available in my APEX application. Each board also has a nifty 'email-to-board' address that allows you to auto-magically create a card via email.

Add Link to Nav Bar

I then created a simple modal page that is called from a permanent link in the Nav Bar, just like Feedback always was. Clicking on the link opens up my modal page, which looks like this:

- The board selection basically returns the email address for it.
- Selecting a tag adds a hashtag to the subject line that will automatically assign it to the right label on my Trello board.
- The Team shuttle adds team member Trello usernames to the card description, which is the equivalent of tagging them as members of the card.

I also wanted to provide users the ability to upload an attachment or screenshot, and of COURSE, I wanted to include Session State information should the user deem it relevant. By default, we set it to yes. Shout-out to LOGGER here, because, full disclaimer, I did sneak a peek at logger.log_apex_items to do something similar. No shame in not wanting to reinvent the wheel, amiright??
create or replace procedure create_trello_card(
    p_app_id          in number,
    p_app_page_id     in number,
    p_team            in varchar2,
    p_include_session in varchar2,
    p_board           in varchar2,
    p_label           in varchar2,
    p_card_title      in varchar2,
    p_description     in varchar2,
    p_attachment      in varchar2)
is
    l_session_state clob := 'Session State:'||chr(13);
    l_scope         logger_logs.scope%type := 'create_trello_card';
    l_params        logger.tab_param;
    l_team          apex_t_varchar2 := apex_string.split(p_team, ':'); -- shuttle values are colon-delimited
    l_member_list   varchar2(4000);
    l_id            number;
begin
    -- Collect application and page item values when the user opts in
    if p_include_session = 'Y' then
        for s in (select 1 app_page_seq, 0 page_id, item_name, v(item_name) item_value
                    from apex_application_items            -- Application items
                   where application_id = p_app_id
                  union all
                  select 2 app_page_seq, page_id, item_name, v(item_name) item_value
                    from apex_application_page_items       -- Application page items
                   where application_id = p_app_id
                     and page_id = p_app_page_id
                   order by 1, 2, 3)
        loop
            l_session_state := l_session_state||s.item_name||' = '||s.item_value||chr(13);
        end loop;
    end if;

    -- Add team member usernames to the description so Trello adds them as card members
    for i in 1 .. l_team.count loop
        l_member_list := l_member_list||' @'||l_team(i);
    end loop;

    -- Send the card to the board's email-to-board address;
    -- the hashtag in the subject assigns the Trello label
    l_id := apex_mail.send(
        p_to        => p_board,
        p_from      => 'email@example.com',
        p_subj      => p_card_title||' '||nvl(p_label, ''),
        p_body      => p_description||l_member_list||chr(13)||l_session_state,
        p_body_html => p_description||l_member_list||chr(13)||l_session_state);

    -- Attach any uploaded file or screenshot
    for c1 in (select filename, blob_content, mime_type
                 from apex_application_temp_files
                where name = p_attachment)
    loop
        apex_mail.add_attachment(
            p_mail_id    => l_id,
            p_attachment => c1.blob_content,
            p_filename   => c1.filename,
            p_mime_type  => c1.mime_type);
    end loop;
exception
    when others then
        logger.log(sqlerrm, l_scope, null, l_params);
        -- handle exception.....
end create_trello_card;

Clicking on the button calls this page process:

The Trello Card

Here is a sample card created from the Survey Builder packaged app. Notice the session state? I love this. You could probably improve on this and only pass through non-null values.

Ultimately, we wanted a single-click way of allowing users or team members to provide feedback on our apps, mainly during the development/QA phase. Dimitri discusses other ways of handling this in his post here. This is an extremely simple way of mimicking some of the features we loved about Feedback and Team Development, and pushing issues/items out to a tool we're using anyway. Hope this helps!
Datasets are an integral part of machine learning and NLP (Natural Language Processing). Without training datasets, machine-learning algorithms would not have a way to learn text mining, text classification, or how to categorize products.

5-10 years ago it was very difficult to find datasets for machine learning and data science projects. Now we are flooded with lists of datasets, and the problem is no longer finding a dataset but sifting through them to keep the relevant ones. So, in this article, we have curated a list of free datasets for machine learning for you.

Datasets for General Machine Learning
In this context, "general" refers to Regression, Classification, and Clustering with relational data.
Wine Quality – Properties of red and white vinho verde wine samples from the north of Portugal. The goal here is to model wine quality based on physicochemical tests.
Credit Card Default – Predicting credit card default is a valuable use for machine learning. This dataset includes payment history, demographics, credit, and default data.
US Census Data – Clustering based on demographics is a tried and tested way to perform market research and segmentation.

Datasets for Natural Language Processing
NLP is all about text data. For data like text, it's important for the datasets to have real-world applications so that sanity checks can be performed easily.
Enron Dataset – Email data from the senior management of Enron, organized into folders.
Amazon Reviews – Contains approximately 35 million reviews from Amazon spanning 18 years. Data includes user information, product information, ratings, and text reviews.
Newsgroup Classification – Collection of almost 20,000 newsgroup documents, partitioned evenly across 20 newsgroups. It is great for practicing topic modeling and text classification.

Finance & Economics Datasets for Machine Learning
Financial quantitative records are kept for decades, hence this industry is perfectly suited for machine learning.
Quandl – A great source of economic and financial data that is useful for building models to predict stock prices or economic indicators.
World Bank Open Data – Covers population demographics and a large number of economic and development indicators across the world.
IMF Data – The International Monetary Fund (IMF) publishes data on international finances, foreign exchange reserves, debt rates, commodity prices, and investments.

Image Datasets for Computer Vision
Image datasets are useful for training a wide range of computer vision applications, like medical imaging technology, face recognition, and autonomous vehicles.
ImageNet – This de facto image dataset for new algorithms is organized according to the WordNet hierarchy, where each node is depicted by hundreds and thousands of images.
Google's Open Images – A collection of around 9 million URLs to images annotated with labels spanning over 6,000 categories, under Creative Commons.
Indoor Scene Recognition – A specific dataset that contains 67 indoor categories and a total of 15,620 images.

Sentiment Analysis Datasets for Machine Learning
Multidomain Sentiment Analysis Dataset – Features product reviews from Amazon.
IMDB Reviews – Dataset for binary sentiment classification. It features 25,000 movie reviews.
Sentiment140 – Uses 160,000 tweets with emoticons pre-removed.
Datasets for Deep Learning
MNIST – Contains images for handwritten digit classification. It is considered a good entry dataset for deep learning, as it is complex enough to warrant neural networks while being manageable on a single CPU.
CIFAR – Contains 60,000 images broken into 10 different classes.
YouTube 8M – Contains millions of YouTube video IDs and billions of audio and visual features pre-extracted by the latest deep learning models.

Public Government Datasets for Machine Learning
Machine learning models trained using public government data help policymakers identify trends and prepare for issues related to population growth, aging, and migration.
Food Environment Atlas – Contains data on local food choices that affect diet in the US.
Chronic Disease Data – Contains data on chronic disease indicators across the US.
The US National Center for Education Statistics – Data on educational institutions and education demographics from around the world.

Datasets for Autonomous Vehicles
Autonomous vehicles need to be trained with large amounts of quality data so that they can perceive their environment and surrounding objects accurately.
Berkeley DeepDrive BDD100k – The largest dataset for self-driving AI. It contains around 100,000 videos covering over 1,100 hours of driving at different times of day and in different weather conditions.
Baidu Apolloscapes – Defines 26 different semantic items such as cars, cycles, pedestrians, and buildings.
Oxford's Robotic Car – Over 100 repetitions of the same route through Oxford, UK, captured over a year. The dataset captures different combinations of traffic, weather, and pedestrians, along with changes such as construction and roadworks.
KUL Belgium Traffic Sign Dataset – Contains more than 10,000 traffic sign annotations from thousands of traffic signs in the Flanders region of Belgium.

With this, we come to the end of this article on "25 Best Free Datasets for Machine Learning". If you need to learn more about Machine Learning, Edureka's Machine Learning Engineer Course makes you proficient in techniques like Supervised Learning, Unsupervised Learning, and Natural Language Processing, and includes training on the latest advancements in Artificial Intelligence & Machine Learning such as Deep Learning, Graphical Models, and Reinforcement Learning.
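As a quick, hedged illustration of getting started with one of the datasets listed above, here is a minimal sketch that pulls the MNIST digits through scikit-learn's OpenML interface and fits a simple baseline model. It assumes scikit-learn is installed; "mnist_784" is the OpenML identifier commonly used for MNIST, and the subsampling is only there to keep the example fast.

# Minimal sketch: load MNIST and train a quick baseline classifier.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Download MNIST from OpenML: ~70,000 28x28 grayscale digits, flattened to 784 features.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A simple baseline; real experiments would use a neural network, as the article suggests.
clf = LogisticRegression(max_iter=100)
clf.fit(X_train[:10000] / 255.0, y_train[:10000])   # subsample to keep the example fast
print("test accuracy:", clf.score(X_test / 255.0, y_test))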
These are development notes for May, 2022.

This month I revisited some aspects of the game to try and find ways to squeeze out some performance, because the current performance made it too laggy on some devices. I found some shocking issues! A lot of the geometry was using default settings and had thus been unoptimized for the longest time!

For example, the golem enemy characters in the game were each generating around 60 000 vertices, which is crazy. How can a basic design, which almost looks like just a couple of blocks thrown together, generate this much geometry?

When I created the enemy back in 2021, I wasn't really concerned about making anything "perfect". The idea was to just get something done quickly, as a placeholder, and focus instead on designing the overall gameplay first. Then in the future I would dive in and create all the characters, level art, etc. more properly. So I just threw stuff together without thinking much about it. I used built-in geometrical shapes that come with the Godot Game Engine.

Now, these shapes have default settings. For example, a capsule shape defaults to 64 Radial Segments and 8 Rings. This gives me way too much polygon detail for a basic character. Even if the character had this high-definition detail, it wouldn't be visible because of how far away the camera is.

The golem character is made up of six of these capsules. One for the body. One for the head. One for the left eye. One for the left eye pupil. One for the right eye. One for the right eye pupil. After reducing the Radial Segments and Rings for each object/capsule, I managed to reduce the vertices from 60 000 down to 2000, all without affecting the look/design of the golem. However, I am pretty sure I can reduce it even further, to around 600 vertices, once I design it properly in the future.

Making these changes made me think, "heey, wait a minute, what about the player character?". Lo and behold, I discovered the same issue. But even worse. The player character had not six capsule shapes, nope, it had eight shapes. And even-EVEN worse, I discovered I had increased the Radial Segments and Rings for some reason I can't remember. It seems like I was being lazy while designing the hair and found that increasing the values would give me the results I was looking for, instead of spending an extra minute or two to solve the design problem properly! Shameful laziness!

My excuse back then was, again: "hey, I will fix everything later once it is time to design stuff in Blender". But I don't think that's a good excuse that justifies 130 000 vertices on a simple blocky character - yikes!

After applying the same fix as I did for the golem enemy, I managed to bring the total down from 130 000 to 9000 vertices. I could have brought it down to around 3000, but then the shapes would have been way too blocky. And because it is the main character, it does make sense for it to be higher quality than everything else.

Now when I hit the play button and start playing the first level, I am getting a total of 30 000 vertices. I was getting 240 000 before these changes! Up next was the level itself... not much could be done here because everything is made up of basic blocks. But I played around with the batch size and managed to squeeze away 6000 vertices. So now the first level generates 24 000 vertices. This is literally ten times less than what I was getting originally. So instead of 240 000 vertices, I am now getting 24 000 vertices, just by making some minor changes.
Hopefully these changes will make the game more playable on mobile devices.

Quality of life changes

Some issues were discovered that made the game an annoyance at times. I've been putting these issues on hold in favor of gameplay content and working on my other projects. But the issues have been getting on my nerves lately as I have been playing my game and showing it off to family and friends. So I decided to dedicate an entire weekend just to working out these issues.

Gamepad and keyboard not working

Players have been complaining about not being able to navigate the options menu when using their gamepad or keyboard. The workaround was to use the mouse to apply all the settings before using the gamepad or keyboard to play. This was of course an oversight on my part when I developed the options menu; I was play-testing using the mouse and expected it to work just fine for keyboards and gamepads. I was completely and utterly wrong!

Implementing the functionality to navigate the options screen using gamepads and keyboards was definitely not an easy task! A lot of code and logic had to be written just to make it work, which took a lot of time to figure out as well. I basically refused to give up; I want my game to feel like it was made to be played using gamepads, so naturally there had to be ZERO resistance/blocks/annoyances when players play the game using their gamepads. But it was also worth the trouble, because I learned some new solutions that I can migrate over to my other projects!

Anyway, some of the design issues were: how would people prefer to navigate a slider and then return to the button menu or move down to the next setting? For example, when you push the right key on the settings button, the focus goes to the audio volume slider. Now, if you push the left key in hopes of returning to the settings button, you'll instead lower the volume! In this special case I mapped it so that when the slider is focused you have to push the UP key to return to the settings button. However, this is not the case for the other buttons in the settings menu. If the Toggle Fullscreen button or Toggle Touch-controller button is focused/selected and the user pushes the left key, then they are taken back to the settings button. This might be confusing for some, but I hope it's such a small issue that people will see past it, and hopefully they will see the logic behind it. I'm also hoping people will naturally just go down the list of settings, and go back when the focus is on a button and not on a slider.

No feedback when changing settings

Is fullscreen on, or is it off? Is the touch-screen controller on or off? There was no way to find the answer to these questions. The player had to check the edge of their screen to see if there was any visible border, which meant fullscreen was off. As for checking whether touch-screen controls were on or off, you had to start a new game!

Now, although these are minor issues, they still caused annoyance, and for some they even lowered the quality of the game! I didn't quite know how to resolve this because I haven't figured out the art direction of the game yet. So dedicating a lot of time to the design of the menu interface seemed like a waste of time, since I would end up re-designing everything in the future anyway, once I have decided on the art direction.

The solution was a simple one. Well, at least it communicates the necessary information the player needs, and that's the most important thing right now.
Below you'll see the primitive, yet effective solution:

RED = OFF
GREEN = ON

It's not the prettiest looking design, but I think it works for the time being.

Anyway, that's all I could work on for now. Hopefully I will find some time next month to continue fixing issues before I develop new things for the game.

I'm also working on a new patch-version-system-thingy to make it easier to keep track of which version the game is on. I thought I was being clever by naming each version after the date and time the game was updated, but it's becoming difficult to keep track of it all using this method. Instead I will have to study up on industry practices and revise all my games. An update with all these changes will be announced once I have patched the game. Hopefully by the end of May or June.
SaveCache step is not executed

The step is supposed to run, but it looks like it is skipped, with the following raw log:

2021-01-25T23:26:07.3363715Z ##[section]Starting: SaveCache
2021-01-25T23:26:07.3373483Z ==============================================================================
2021-01-25T23:26:07.3374104Z Task : Save cache
2021-01-25T23:26:07.3374584Z Description : Saves a cache with Universal Artifacts given a specified key.
2021-01-25T23:26:07.3375015Z Version : 1.0.18
2021-01-25T23:26:07.3375370Z Author : Microsoft Corp
2021-01-25T23:26:07.3375741Z Help :
2021-01-25T23:26:07.3376144Z ==============================================================================
2021-01-25T23:26:08.0490208Z ##[section]Finishing: SaveCache

If a step is skipped because its condition does not match, that information is normally printed out like in other tasks, so I think this might be an issue with the Save Cache task itself.

I'm getting the same problem. No indication of why it's skipping:

2021-05-18T16:25:42.3930535Z ##[section]Starting: Save artifact based on: /azp/agent/_work/r2/a/..../package-lock.json
2021-05-18T16:25:42.3937242Z ==============================================================================
2021-05-18T16:25:42.3937480Z Task : Save cache
2021-05-18T16:25:42.3937690Z Description : Saves a cache with Universal Artifacts given a specified key.
2021-05-18T16:25:42.3937886Z Version : 1.0.18
2021-05-18T16:25:42.3938085Z Author : Microsoft Corp
2021-05-18T16:25:42.3938238Z Help :
2021-05-18T16:25:42.3938420Z ==============================================================================
2021-05-18T16:25:43.6316856Z ##[section]Finishing: Save artifact based on: /azp/agent/_work/r2/a/..../package-lock.json
Python is a powerful language. However, the import system can be hard to understand, and not just for beginners. It is a little bit challenging. Hopefully, this article will give you some hints.

What are imports?

Do you want to import something in Python? Use import:

import os

try:
    os.mkdir("new")
except OSError:
    print("cannot create new dir")

Any module can access other modules like that. Seems fair enough, so what is so complicated?

Before we answer that, what exactly is a module in Python? The idea is to reuse blocks of code, making the program more robust and maintainable (modularity). However, a module is not just any file containing Python code with a .py extension. Sometimes, developers like to group several instructions into a .py file instead of typing them one by one in the interpreter. In that case, it's called a script. When the code gets longer, it's better for maintenance to split it up, e.g., into function definitions. Otherwise, the code becomes harder to read. Those definitions are made available through a unique and central file we call a module. We also write modules because definitions are lost when you leave the Python interpreter. A package is a group of modules (~ a folder).

There are many different syntaxes for imports. You can import the entire thing:

import os

You can import only one or several functions:

from os import mkdir

The apparent difference is that the second syntax allows you to use mkdir directly, whereas the first syntax makes you write os.mkdir every time. Another big difference is that the second syntax does not import os but only mkdir.

There's another standard syntax that involves aliases:

import super_mega_long_module_name as module

It is useful for shortening extra-long names and when two packages define different functions with the same name. Anyway, it's imperative to note that:
- imports are case-sensitive
- Python runs imported modules and packages

I encourage using PEP 8 styles for your imports, especially if you are a beginner. It's a style guide that gives best practices.

Never use wildcard imports

You should never write something like the following:

from MODULE import *

You are loading everything contained in said MODULE, which can be a massive amount of code, and it's the "best" way to get naming collisions. You may get errors or weird behaviors that are difficult to debug. Linters such as Pylint always flag wildcard imports as errors.

Imports behind the scenes

When you write any import statement, the Python interpreter first looks in the module cache: a dictionary that maps module names to modules which have already been loaded. If it does not find anything, it looks in the built-in modules (written in C), e.g., math. If it still does not find anything, it uses sys.path, which is a list of directories that includes several paths, including the PYTHONPATH. When it finds something, it binds the name you use in your statement in the local scope, which allows you to use it and make aliases.

Relative vs. absolute imports

An absolute import looks like this:

from mypackage import mymodule

Here is a relative import:

from . import mymodule

The dot (".") refers to the current code file's directory. If there are two dots (".."), it refers to the parent directory. You have to use dots because you must be explicit when making a relative import. Python 3 does not allow implicit relative imports.
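To make the lookup order described above concrete, here is a small sketch you can run in any Python 3 interpreter. It only inspects the standard sys module, so nothing here is specific to a particular project; the exact contents printed will vary by platform and build.

import sys

# 1. The module cache: once a module has been imported, it lives in sys.modules,
#    and later imports of the same name are resolved from this dictionary.
import os
print("os" in sys.modules)            # True: os is now cached

# 2. Modules compiled into the interpreter itself (the "built-in modules" step).
print(sys.builtin_module_names[:5])   # varies by platform/build

# 3. The directories searched for everything else, including PYTHONPATH entries.
for directory in sys.path:
    print(directory)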
According to the PEP 8 guidelines, absolute imports are the best practice as they are better for readability, and you should use explicit relative imports only in particular cases: when dealing with complex package layouts where using absolute imports would be unnecessarily verbose.

It means that the following syntax has very little interest:

from mypackage.mysubpackage.mysubsubpackage.mymodule import myfunction

Here, an explicit relative import seems legitimate.

It's a trap!

Imports can turn nasty. The circular effect happens when A imports B and B imports A. You often get an error. You'd rather refactor your code than try any hacky workaround. There are other traps on the list, but I prefer debugging instead of listing all the cases here. Let's do it!

One of the best options is the "-v" option. It stands for "verbose," and it can save you a lot of time:

python3 -v mymodule.py

However, there are more vicious bugs, which are tough to debug. In those cases, you don't have a lot of choices. A step-by-step debug is probably your only chance. To do that, use the pdb module:

import pdb; pdb.set_trace()

It is very useful for adding breakpoints and starting your investigations. It fixes the context where you put the breakpoint, so Python will execute any expression in that specific context.

Here comes the weirdness

We just saw it's better to structure your code with modules and packages when you make Python apps. There's this file with a strange name, __init__.py; you might see it multiple times in blog posts and the documentation. This file can be either empty or full of code that initializes stuff. What the heck is this?

Before Python 3.3, it was a mandatory file! If you removed it, Python would not load any submodules from the package anymore. It's essential to note that Python loads this file first in a package. That's why developers use it to initialize stuff. Python 3.3 introduced implicit namespace packages, so you can remove this file. The requirement still applies in Python 2, though.

But wait. That only applies to empty __init__.py files. If you need some particular initialization, you may still need that file! One should be extra careful when migrating old code. Thus, it's pretty wrong to say it's no longer needed. I would advise not to use implicit namespace packages unless you are perfectly aware of what you are doing.

if __name__ == '__main__'?

__name__ is a magic variable that holds the name of the current Python module. With the following code:

if __name__ == "__main__":
    # some code here

You tell Python to execute the code only when the file is run directly, e.g., with python myscript.py or python -m myscript. The code does not run if you import it as a module.

Structure your code

So Python imports are tricky, but at the same time, it's good practice to split code into reusable modules? What do we do next?

First, remember that not all code needs files, modules, packages, and a complex folder hierarchy. A significant part of the work consists of small scripts and command lines. When doing some data science stuff with Python, you do many operations with fancy software such as Jupyter. Python is just the language; the only thing you care about is the data results. However, for an entire application, you probably need some layout. Let's try some typology. The simplest case is when all of your code fits in a single file: in that case, you are writing a script. A minimal skeleton for such a script is sketched just below, before we look at the folder layout.
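Here is a minimal, hedged sketch of such a script; the argument handling is only illustrative, so adapt it to your needs.

#!/usr/bin/env python3
"""A small, self-contained script: everything lives in one file."""

import sys


def main(args):
    # Real work goes here; keeping it in a function makes the file importable
    # from tests or other modules without side effects.
    print(f"received {len(args)} argument(s): {args}")
    return 0


if __name__ == "__main__":
    # Only runs when the file is executed directly, not when it is imported.
    sys.exit(main(sys.argv[1:]))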
Let's keep things at the same level (in the same folder):

myscript/
│
├── .gitignore
├── myscript.py
├── LICENSE
├── README.md
├── requirements.txt
├── setup.py
└── tests.py

The structure above is simple on purpose:
- setup.py is for dependencies and installation (a minimal sketch is included at the end of this article)
- tests.py is for tests
- requirements.txt is for other developers who want to use our script; it lists the required Python libraries and their versions, and the Python package manager (pip) uses it to install them
- myscript.py is your code

A package is a collection of modules. This time, we will have subdirectories:

moduloo/
│
├── .gitignore
├── moduloo/
│   ├── __init__.py
│   ├── moduloo.py
│   └── utils.py
│
├── tests/
│   ├── moduloo_tests.py
│   └── utils_tests.py
│
├── LICENSE
├── README.md
├── requirements.txt
└── setup.py

N.B.: It's probably a good idea to add a docs directory to that structure, but we won't cover that here.

Indeed, those layouts are quite basic. If you need a more complex structure, e.g., for the web, I recommend looking at web frameworks such as Django.

I hope you have a better overview of Python imports. I strongly recommend using PEP 8 styles, especially if you are a beginner. Most of the time, you'd better use absolute imports than relative imports; relative imports make sense only in a few cases, and even in those cases they have to be explicit. Do not hesitate to use aliases along with your imports. Keep your layout as simple as possible. It's better for both readability and maintenance.
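For completeness, here is one minimal setup.py sketch for the moduloo layout above. It assumes setuptools and uses placeholder metadata; adjust the name, version, and dependencies to your own project.

# setup.py - minimal packaging sketch for the moduloo layout above (placeholder metadata).
from setuptools import setup, find_packages

setup(
    name="moduloo",                               # placeholder name from the example layout
    version="0.1.0",                              # placeholder version
    packages=find_packages(exclude=("tests",)),   # picks up the moduloo/ package
    install_requires=[
        # runtime dependencies would be listed here, e.g. "requests>=2.0",
    ],
    python_requires=">=3.6",
)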
I am from an architectural research center, CITA (Centre for IT and Architecture) at the school of architecture in Copenhagen, and we are working on an EU-funded research project (duraark.eu) which, among other things, deals with 3D scanning and point cloud analysis. We are very interested in being able to build our point cloud components through Dynamo and Python, to be able to test our prototypes directly within the workflow of our stakeholders, e.g. architects. I have a few questions that one of you might be able to help me with, to get us started:

- How should a C++ DLL be managed to be able to call it from within Python/Dynamo? And how would this be done?
- Is it possible to access the PointCloud class in Revit through Python/Dynamo? And how would this be done?

Hope you can help me out, and thanks very much in advance.

Hi Henrik, for both queries you made I advise you to start with this page from GitHub: https://github.com/DynamoDS/Dynamo/wiki. The API is accessible with Python nodes, so the PointCloud class is available. Looking forward to seeing your progress!

Thanks for your answer. I will definitely look into the wiki you sent me. Great. About the Revit API: I have never tried to work with it, either in Python or Dynamo or anywhere else. So if you have some directions or examples on how to work with it, and especially the point clouds, that would be really helpful. Maybe a more specifically phrased question: if I have a point cloud file, say an .e57, how do I import it into Revit, handle it, and eventually erase it again from within Dynamo/Python?

At the moment Dynamo doesn't have any easy way to import an .e57 point cloud file. If the points are in an easy-to-parse format, such as a text file with sets of three floats, you could read it in and parse it using a Python node (a rough sketch of this is included at the end of this thread). That said, Dynamo's current Point implementation is very "heavyweight", meaning you won't be able to make hundreds of thousands of Points without running into trouble. I've added a user story to our backlog covering importing and displaying point cloud data such as .e57.

Taking your second question, it looks as though you can load an .e57 point cloud file using the Revit API; see http://help.autodesk.com/view/RVT/2014/ENU/?guid=GUID-B80DBCF1-56A8-4864-A0CD-181466E0EDE8 for a place to start. That allows you to create a Point Cloud Type and Instance within the Revit model, and interrogate the cloud for its point data. All of this is possible from the Revit API using C# or VB.NET. In my experience of Python nodes in Dynamo, all of the Revit API is available to you automagically via reflection. But, regarding Patrick's answer, that may not be true in this case. I tend to start by testing things in C# using macros inside Revit, then perhaps try it in RevitPythonShell directly, then see if the same Python code works inside Dynamo. It depends on what you're trying to do with the point clouds once you have them loaded into Revit: what are you planning to do with them from Dynamo?
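For reference, here is a rough sketch of the read-and-parse approach suggested above, written as the body of a Dynamo Python node. It assumes a plain text file with one whitespace-separated "x y z" triple per line, wired into the node's first input (IN[0]), and it is only practical for modest point counts given how heavyweight Dynamo's Point objects currently are.

# Dynamo Python node sketch: parse an "x y z" text file into Dynamo Points.
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Point

path = IN[0]  # file path wired into the node's first input

points = []
with open(path) as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 3:
            x, y, z = (float(v) for v in parts[:3])
            points.append(Point.ByCoordinates(x, y, z))

OUT = points  # the node outputs the list of Points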