// Compute the discount amount for a given price and percentage.
const calDescuento = (precio, porcentaje) => (precio / 100) * porcentaje;

// Compute the final price after applying the discount.
const precioTotal = (precio, porcentaje) => (precio / 100) * (100 - porcentaje);

function realizarDescuento() {
    const precio = Number(document.getElementById("PRECIO-DEL-PRODUCTO").value);
    const porcentaje = Number(document.getElementById("DESCUENTO").value);

    if (!precio && !porcentaje) {
        swal("You didn't enter any data", "Try again :(", "error");
    } else if (precio < 1) {
        if (porcentaje < 1 || porcentaje > 100) {
            swal("The price and the discount are invalid", "Try again :(", "error");
        } else {
            swal("The price is invalid", "Try again :(", "error");
        }
    } else if (porcentaje < 1 || porcentaje > 100) {
        swal("The discount is invalid", "Try again :(", "error");
    } else {
        const descuento = calDescuento(precio, porcentaje).toFixed(2);
        const total = precioTotal(precio, porcentaje).toFixed(2);
        swal("Discount of " + descuento + "$",
             "Total price of " + total + "$ with a " + porcentaje + "% discount", "success");
    }
}
When you hit a certain level in your product career, a big part of your job becomes managing your team(s) rather than actually doing the work. It's rare that this shift is ever formally ushered in, and it's very rare that managers of product managers are ever given formal training. Managers of product managers, or product leaders, usually end up in those roles because they are great product managers. But great product managers, perhaps counter-intuitively, do not always make great people managers. Great people managers need to be really intentional and transparent about how they manage their direct reports and what is expected from them.

Product management was a pretty young discipline when I started, and I wasn't really managed through much of my career. There were a few reasons for this: one of my managers was across an ocean, another was completely overstretched across 20 reports, and a third didn't come from a product background. Whatever the reason, what I absorbed from this was that my work wasn't that important to my manager, how I did it even less so, and finally, whatever problems I encountered, I was on my own. This sense of isolation, of the quality of my work being cloaked in mystery, of never "really" knowing how I was performing, had serious consequences for my stress and anxiety levels. Like most product managers, I cared passionately about my work, and I wanted to do well. Without any guidance or insight about how I was evaluated, or where I should focus, I could only infer that I was doing well: I kept being handed important clients or projects, stakeholders trusted my reports, and the products I delivered performed well with customers. Still, a strong sense of anxiety never left me, and it more often than not led to burn-out and leaving roles because I was exhausted from the guessing game and feeling undervalued.
Without a positive example to emulate when I first started to manage people, I reflected on the management characteristics I didn't want to emulate. One of my worst managers turned up to roughly two one-to-ones for every 10 we should have had, and that was where I wanted to start. Of all my meetings during a week, my one-to-ones with my direct reports are not allowed to slip. They are your best chance of a temperature check on project health, team health, and an individual's wellbeing. Consistency with one-to-ones is your tool for building trust with your reports and hence your teams.

From that foundation, I thought about where else I had craved direction from my managers, and it boiled down to a few simple thoughts:

- I didn't understand what "good" looked like
- I didn't know where I was on the "good" scale
- I was never told what my manager/leadership needed from me

With these things in mind, I knew I wanted to create a feedback framework that would help my direct reports answer these questions. A word on feedback cycles: I think deeper feedback should be done quarterly, or at least a few times per year. I've never felt like I learned that much from a yearly 360. Extracting feedback at such a high level meant that the same headlines surfaced again and again, and it was difficult to pinpoint where or how progress had been made.

A Framework for Skills Development

As I worked on developing my product feedback framework, I focused on two buckets of skills, Strategic and Tactical:

- Strategic skills: often referred to as "soft" skills, these skills are the most difficult to execute. They deal with communication, interpersonal skills, emotional intelligence, and organisational alignment. They help to ensure that work moves forward efficiently and is aligned to organisational goals.
- Tactical skills: applied skills common for the facilitation and implementation of product work.
These skills can be, and are, used by any member of any team, but they are crucial to the success of products and services, hence they are top priority for us. Here are the lists of Strategic and Tactical skills that I expect product folks on my team to learn, implement, and eventually master. In my framework, I provide definitions of each, but because they can be so different in different organisations, I've left them out here.

| Strategic | Tactical |
| --- | --- |
| Process thinker | Designing solutions |

I present this list and the definitions to my direct reports about two weeks before we have a discussion and ask them to evaluate themselves for each skill. In my past evaluations, I was told to rate myself from 1-5 for each skill, but people are not numbers, and it felt really cold and calculating to rate someone. Also, what do numbers really mean when it comes to development and progression? So I decided to use a reflection system that felt more humane and positive. Being a learner is not a negative, and being a teacher doesn't mean that you are ever done. Prior to the session, you ask your report to think about where they sit on the spectrum from learner to teacher.

During the feedback sessions, which take about an hour and a half, my direct reports walk me through where they place themselves, I give my feedback on where I see them, and we brainstorm stretch projects or reflection questions for as many skills as feel relevant. Finally, we chat through three questions:

- Which skills are most applicable to your work for the next quarter?
- What feels risky right now?
- How can you be supported in that?

The purpose of these three questions is to help my direct reports understand that they are not expected to improve across all skills by the next quarter. Rather, I want them to be strategic and intentional about their development. I encourage them to focus on the top three skills that will help them most with their current work. The rest can be thought through as situations come up.
The last two questions are an opportunity for me to understand where they are worried about the work, so that I can position myself to support them better. Generally, my philosophy as a manager is pretty simple. I want my direct reports to know that I have their backs, that I care about their work, and that I care about their mental wellbeing too. Finally, I don't want to be the thing that anyone goes home and complains about. I would love to hear your thoughts on how other managers help their direct reports set intentions around growth and development. What's worked and what hasn't? And please let me know if you apply any of the above framework. I'd love to know how it went.
Automatic quantitative analysis of morphology of apoptotic HL-60 cells

Liu, Yahui; Lin, Wang; Yang, Xu; Liang, Weizi; Zhang, Jun; Meng, Maobin; Rice, John R.; Sa, Yu; Feng, Yuanming

Morphological identification is a widespread procedure to assess the presence of apoptosis by visual inspection of the morphological characteristics or the fluorescence images. The procedure is lengthy and the results are observer dependent. A quantitative automatic analysis is objective and would greatly help the routine work. We developed an image processing and segmentation method which combined Otsu thresholding and morphological operators for apoptosis study. An automatic method for determining the apoptotic stages of HL-60 cells from fluorescence images was developed. Normal cells, early apoptotic cells, and late apoptotic cells were compared on geometric parameters defined to describe the features of cell morphology. The results demonstrated that the chosen parameters are very representative of the morphological characteristics of apoptotic cells. Significant differences exist between cells in different stages, and automatic quantification of the differences can be achieved.

Citation: Liu Y, Lin W, Yang X, Liang W, Zhang J, Meng M, Rice JR, Sa Y, Feng Y. Automatic quantitative analysis of morphology of apoptotic HL-60 cells. EXCLI Journal. January 2014;13:19-27. http://hdl.handle.net/10342/5730. Accessed July 07, 2022.
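The segmentation pipeline the abstract describes (Otsu thresholding followed by morphological operators) can be sketched as below. This is an illustrative reimplementation under stated assumptions, not the authors' code: the function names, the 3x3 square structuring element, and the use of a morphological opening to remove noise are all choices made here for demonstration.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    cum_p = np.cumsum(p)                    # class-0 probability up to level t
    cum_mu = np.cumsum(p * np.arange(256))  # cumulative mean gray level
    mu_total = cum_mu[-1]
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mu[t] / w0
        mu1 = (mu_total - cum_mu[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    m = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in range(k):
        for dx in range(k):
            out &= m[dy:dy + h, dx:dx + w]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    m = np.pad(mask, pad, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in range(k):
        for dx in range(k):
            out |= m[dy:dy + h, dx:dx + w]
    return out

def segment(img, k=3):
    """Threshold with Otsu, then clean up with a morphological opening."""
    mask = img > otsu_threshold(img)
    return dilate(erode(mask, k), k)
```

From the resulting binary mask, geometric parameters such as area, perimeter, and circularity could then be measured per connected component, which is the kind of quantification the paper reports.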
Data science has become increasingly important in various industries, including healthcare, finance, and technology. However, to truly excel in this field, it's not enough to simply have technical skills. Domain experience is a critical component that can make a significant difference in the outcome of data science projects.

What is Domain Experience in Data Science?

Domain experience refers to expertise in a particular industry or field. For instance, a healthcare data scientist should have a good understanding of healthcare processes, regulations, and challenges. Similarly, a finance data scientist should have knowledge of financial markets, instruments, and trends. In short, domain experience enables a data scientist to understand the context in which data is generated and used.

Why is Domain Experience Important in Data Science?

Better Understanding of the Data: Domain experience allows a data scientist to better understand the data they are working with. For instance, they can identify outliers, anomalies, or errors more easily, and know how to address them effectively.

Ability to Ask the Right Questions: Having domain expertise helps a data scientist to ask the right questions and identify the most relevant variables to analyze. This can lead to more effective data modeling and, ultimately, better outcomes.

Faster and More Efficient Problem-Solving: When a data scientist has domain expertise, they can recognize patterns and trends in the data more quickly. This allows them to develop solutions faster and with greater accuracy.

More Informed Decision-Making: Data science projects ultimately aim to support decision-making. When a data scientist has domain expertise, they are better equipped to provide insights that are relevant and actionable, leading to more informed decision-making.

Real-World Applications and Business Outcomes: Data science projects have real-world applications, and domain expertise can help to ensure that the insights generated are relevant to the business context.
This can lead to improved business outcomes, such as increased revenue, cost savings, and process improvements.

How to Gain Domain Experience in Data Science?

There are various ways to gain domain experience, including working in the industry, collaborating with experts in the field, attending conferences and workshops, and reading industry publications. Additionally, it's important to stay up to date with industry trends and advancements through continuous learning.

Several case studies demonstrate the importance of domain expertise in data science. For instance, a data scientist working in the healthcare industry with knowledge of medical terminology was able to develop a more accurate predictive model for patient re-admissions. In another example, a data scientist with expertise in financial markets was able to develop a more effective fraud detection system for a bank.

Popular Domains in Data Science

Data science is a field that can be applied in a variety of domains and industries. Here are some examples of domains where data scientists can apply their skills:

- Healthcare: Data scientists can work in the healthcare industry to analyze patient data, develop predictive models for disease diagnosis and treatment, and improve healthcare outcomes.
- Finance: Data scientists can work in the finance industry to develop models for predicting market trends, analyzing investment risks, and detecting fraudulent activities.
- Marketing: Data scientists can work in the marketing industry to analyze customer data, develop targeted advertising campaigns, and measure the effectiveness of marketing strategies.
- Retail: Data scientists can work in the retail industry to analyze customer behavior, optimize supply chain operations, and develop personalized product recommendations.
- Manufacturing: Data scientists can work in the manufacturing industry to optimize production processes, reduce waste, and improve product quality.
- Transportation: Data scientists can work in the transportation industry to analyze traffic patterns, optimize routes, and develop predictive maintenance models for vehicles.
- Energy: Data scientists can work in the energy industry to analyze consumption patterns, develop predictive models for energy demand, and optimize energy production and distribution.

These are just a few examples of the many domains where data scientists can apply their skills. With the increasing availability of data and the growing demand for data-driven insights, the opportunities for data scientists to make an impact in various domains are vast and growing.

How to Gain Domain Knowledge for Data Science

Gaining domain knowledge is an important part of becoming a successful data scientist. Here are some ways to gain domain knowledge:

- Work in the industry: One of the best ways to gain domain knowledge is to work in the industry. This will allow you to gain hands-on experience and learn about the challenges and opportunities within the industry.
- Collaborate with domain experts: Collaborating with domain experts can help you gain a deeper understanding of the industry and its challenges. This can be done through networking, attending industry events, or collaborating on projects.
- Read industry publications: Reading industry publications and staying up to date on industry news can help you stay informed about the latest trends and challenges in the industry.
- Take online courses: There are many online courses and tutorials available that can help you gain domain knowledge. These courses cover a wide range of topics and can be completed at your own pace.
- Attend workshops and conferences: Attending workshops and conferences is a great way to gain knowledge and network with other professionals in the industry.
- Work on personal projects: Working on personal projects that are related to the industry can help you gain hands-on experience and develop a deeper understanding of the industry.
It's important to remember that gaining domain knowledge is an ongoing process. As the industry evolves, you will need to continuously update your skills and knowledge to stay relevant and competitive.

Selection of Domain

Choosing a domain to specialize in as a data scientist can be a challenging task. Here are some steps to help you choose a domain:

- Identify your interests: The first step is to identify your interests and passions. Think about the industries or fields that you find most interesting and enjoyable.
- Evaluate demand: Once you have identified your interests, evaluate the demand for data scientists in those domains. Look for job postings and industry reports to determine the demand for data scientists in each domain.
- Assess your skills: Assess your skills and determine which domains align with your strengths. Consider the tools, programming languages, and techniques required in each domain and assess your proficiency in these areas.
- Research the industry: Research the industries that align with your interests and skills. Look for industry reports, articles, and whitepapers to gain a deeper understanding of the challenges, opportunities, and trends in each industry.
- Consider the impact: Consider the impact that you can make in each industry. Think about the potential for your work to make a difference and to have a positive impact on the industry and society.
- Seek advice: Seek advice from professionals in the industry or from mentors. They can provide valuable insights into the industry and can help you make an informed decision.

Choosing a domain is an important decision, but it's important to remember that you can always pivot and change direction if you find that your chosen domain is not the right fit. As you gain experience and knowledge, you may discover new interests and opportunities that will help guide your career path.

Domain experience is a critical component of success in data science.
It allows data scientists to better understand the data they are working with, ask the right questions, solve problems more efficiently, and ultimately provide insights that are relevant to the business context. By gaining domain expertise, data scientists can improve their effectiveness and generate better outcomes for their organizations.

Recommended books:

- "Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking" by Foster Provost and Tom Fawcett
- "Applied Data Science: Lessons Learned for the Data-Driven Business" by Carlos Andre Reis Pinheiro and Anne-Laure Folly
- "Data Science in Healthcare: Beyond the Hype" by Mark Ramsey, Ramez Elmasri, and Eric Williams
- "Storytelling with Data: A Data Visualization Guide for Business Professionals" by Cole Nussbaumer Knaflic
- "Data-Driven: Creating a Data Culture" by Hilary Mason and DJ Patil
- "Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data" by EMC Education Services
- "Data Science for Non-Technical Business Professionals: 4 Keys to Success" by Anna Johansson
- "Building a Data Driven Business: A Practical Guide to Business Intelligence with SQL Server" by Brian Larson

Recommended courses:

- "Data Science for Non-Data Scientists" offered by IBM on Coursera
- "Data Science Essentials: Business Case Development" offered by Microsoft on edX
- "Introduction to Data Science for Non-Technical Professionals" offered by UC Berkeley Extension on edX
- "Data Science Foundations: Knowledge Discovery" offered by IBM on Coursera
- "Data Science for Business Leaders: Business Analytics and Data-Driven Insights" offered by Columbia University on edX
- "Introduction to Data Science" offered by IBM on edX
- "Data-Driven Decision Making" offered by Duke University on Coursera
- "Data Science for Executives" offered by Columbia University on edX
For varying kinds of risk you need varying kinds of security. For an intranet application it may be suitable to use Windows authentication (see Chapter 13 for more information), but for Internet applications you may want to use a more aggressive approach. For example, you may elect to have users log in using forms authentication and further restrict access based on assigned roles for those users. The first step is to authenticate the user. I will demonstrate forms authentication here. (For a good example of forms authentication and roles-based security, refer to the IBuySpy portal code available for download from Microsoft at http://www.asp.net.com.) Forms authentication is just what it sounds like: you provide a login form for the user to enter a user name and password, authenticate that user, and allow access if the person provides valid credentials.

There are a few pieces to this puzzle. The first two occur in the Web.config file. We need to modify the Web.config file for the application that requires authentication; specifically, we need to modify the <authentication> and <authorization> tags. Additionally, we will need to provide some sort of login form. Finally, we need to set an authorization cookie if the user is authenticated. Listing 15.31 shows the Web.config modifications.

Listing 15.31 Modifying the Web.config File for Forms Authentication

<authentication mode="Forms">
  <forms name=".ASPXAUTH" loginUrl="Login.aspx"
         protection="All" timeout="20" />
</authentication>
<authorization>
  <deny users="?" />
</authorization>

The authentication mode is set to Forms. (The default is Windows integrated.) When we change authentication to Forms we need to include a nested <forms> tag, which specifies the cookie name, a login URL, a protection attribute that describes how the cookie is stored, and an expiration attribute, timeout. In Listing 15.31 we have indicated that the user should be redirected to a page named Login.aspx for authentication.
The protection value All means that the authentication cookie is validated and encrypted, and the authentication cookie will expire after 20 minutes. The authorization section is very simply set to deny all unauthenticated users. (The wildcard ? means unauthenticated users.) With these settings all users will be redirected to the Login.aspx page for authentication. (The Login.aspx page is included in CachingDemo.sln.) As mentioned above, if the user is authenticated, we need to set an authorization cookie. Listing 15.32 shows some very basic code for that.

Listing 15.32 Setting an Authentication Cookie for Authorized Users

Private Sub Button1_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles Button1.Click
    ' Authenticate here!
    FormsAuthentication.SetAuthCookie(TextBox1.Text, True)
    Response.Redirect(Request.ApplicationPath + "/WebForm1.aspx")
End Sub

The Button1_Click code is from the Login.aspx page. The code simply authenticates everyone. In a production application you would read the user name and password from some repository. If the supplied user name and password were valid, you would call FormsAuthentication.SetAuthCookie and redirect the user to the requested path. In the listing the second argument to SetAuthCookie indicates that the authorization cookie is persisted, which means the user will still be authenticated after the browser closes and until the cookie expires. Pass False if you want the cookie to reside in memory and expire when the user closes the browser. Finally, in the Redirect statement I had to add the name of the page because I didn't use one of the default pages for my instance of IIS. Had the page been named default.aspx, I could return the user to the originally requested page with Response.Redirect(Request.ApplicationPath). This section covers basic forms authentication.
To go beyond that, you will need to create an instance of a class that implements IPrincipal, add a string list of roles, and assign this principal object to the Context.User property. The string roles can be used to verify the roles of an authenticated user. There are many levels of security; necessity will dictate how much more exploration you will need to apply to your specific application. For more on another kind of security, code access security, refer to Chapter 18. For everything you ever wanted to know about .NET security, pick up a copy of .NET Framework Security by Brian LaMacchia et al.
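The IPrincipal step described above can be sketched as follows. This is an illustrative fragment, not the book's code: the Global.asax event handler location, the hard-coded roles array, and the use of the framework's GenericPrincipal (a ready-made IPrincipal implementation) are assumptions; in practice you would look the roles up in your user store.

```vb
' Attach roles to the authenticated user once per request (Global.asax).
Private Sub Application_AuthenticateRequest(ByVal sender As Object, _
    ByVal e As EventArgs)
    If Request.IsAuthenticated Then
        ' Look up roles for Context.User.Identity.Name in your repository;
        ' the array below is illustrative only.
        Dim roles As String() = New String() {"User", "Admin"}
        Context.User = New System.Security.Principal.GenericPrincipal( _
            Context.User.Identity, roles)
    End If
End Sub
```

Later in the request, a check such as Context.User.IsInRole("Admin") verifies the roles of the authenticated user.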
QMAKE to CMake Using a custom Virtual Keyboard

I'm trying to move to cmake from qmake. Currently on Qt 5.15.2. So far I can compile the whole app; however, I fail to link the (custom) virtual keyboard plugin. I use a slight modification of https://github.com/githubuser0xFFFF/QtFreeVirtualKeyboard/tree/master/src . Until now I had a .pro file for the keyboard like:

QT += qml quick quick-private gui-private
TEMPLATE = lib
RESOURCES += qml/VirtualKeyboard.qrc
SOURCES += src/DeclarativeInputEngine.cpp \
    src/VirtualKeyboardInputContext.cpp \
    src/VirtualKeyboardInputContextPlugin.cpp
HEADERS += src/DeclarativeInputEngine.h \
    src/VirtualKeyboardInputContextPlugin.h \
    src/VirtualKeyboardInputContext.h
OTHER_FILES += KeyButton.qml \
    KeyModel.qml \
    VirtualKeyboard.qml

(note my VirtualKeyboard.qrc contains all qml files)

The resulting VirtualKeyboard.dll I used to copy into a folder called platforminputcontexts alongside the main binary. The main.cpp of the app was like:

qputenv( "QT_IM_MODULE", QByteArray( "VirtualKeyboard" ) );
QGuiApplication app( argc, argv );
app.addLibraryPath( QCoreApplication::applicationDirPath() );

and the main.qml like:

import VirtualKeyboard 1.0

The above was working flawlessly using qmake. So I transformed it all to cmake. It is properly compiling and creating the dll.
I have the keyboard's CMakeLists.txt like:

cmake_minimum_required(VERSION 3.10.0)
project(VirtualKeyboardPlugin)

find_package(Qt5 REQUIRED COMPONENTS Qml Quick Gui)

set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
set(CMAKE_AUTOUIC ON)

add_definitions(${Qt${QT_VERSION_MAJOR}Quick_DEFINITIONS})

file(GLOB_RECURSE HEADERS ${CMAKE_CURRENT_SOURCE_DIR}/src/*.h ${CMAKE_CURRENT_SOURCE_DIR}/src/*.hpp)
file(GLOB_RECURSE SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp)
file(GLOB_RECURSE RESOURCES ${CMAKE_CURRENT_SOURCE_DIR}/qml/*.qml)

# Add the library
add_library(${PROJECT_NAME} MODULE ${SOURCES} ${HEADERS} ${RESOURCES})

set_target_properties(
    ${PROJECT_NAME}
    PROPERTIES
        OUTPUT_NAME "${PROJECT_NAME}"
        PREFIX ""
)

target_link_libraries(
    ${PROJECT_NAME}
    PRIVATE
        Qt${QT_VERSION_MAJOR}::Qml
        Qt${QT_VERSION_MAJOR}::QuickPrivate
        Qt${QT_VERSION_MAJOR}::Gui
)

However, if I put this "new" .dll into the folder, I get

qrc:/main.qml:6:1: module "VirtualKeyboard" is not installed.

Is there anything I'm missing while compiling the VirtualKeyboard?
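One gap worth checking, based on comparing the two build files above (this is an assumption about the cause, not a confirmed fix): the qmake build compiled qml/VirtualKeyboard.qrc via RESOURCES, whereas the CMakeLists globs only the loose .qml files. CMAKE_AUTORCC only processes .qrc files listed as target sources, so the QML resources are never embedded in the DLL and the VirtualKeyboard module cannot be found at runtime. A sketch of the change:

```cmake
# Pass the .qrc to the target so CMAKE_AUTORCC compiles it,
# instead of globbing the individual .qml files:
add_library(${PROJECT_NAME} MODULE
    ${SOURCES}
    ${HEADERS}
    ${CMAKE_CURRENT_SOURCE_DIR}/qml/VirtualKeyboard.qrc
)
```

After rebuilding, the embedded resources should be visible inside the DLL, mirroring what the qmake RESOURCES variable produced.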
AJAX Interview Questions

A list of frequently asked AJAX interview questions and answers is given below.

1) What is AJAX?

2) What are the advantages of AJAX?

- Quick response
- Bandwidth utilization
- The user is not blocked until data is retrieved from the server.
- It allows us to send only important data to the server.
- It makes the application interactive and faster.

3) What are the disadvantages of AJAX?

- Security issues
- Debugging is difficult

4) What are the real web applications of AJAX currently running in the market?

5) What are the security issues with AJAX?

- AJAX source code is readable.
- Attackers can insert scripts into the system.

6) What is the difference between synchronous and asynchronous requests?

A synchronous request blocks the user until a response is retrieved, whereas an asynchronous request doesn't block the user.

7) What are the technologies used by AJAX?

- HTML/XHTML and CSS - These technologies are used for displaying content and style.
- DOM - It is used for dynamic display and interaction with data.
- XML - It is used for carrying data to and from the server.
- XMLHttpRequest - It is used for asynchronous communication between client and server.

8) What is the purpose of XMLHttpRequest?

- It sends data in the background to the server.
- It requests data from the server.
- It receives data from the server.
- It updates data without reloading the page.

9) What are the properties of XMLHttpRequest?

The important properties of the XMLHttpRequest object are given below.

- onreadystatechange - It is called whenever the readyState attribute changes.
- readyState - It represents the state of the request.
- responseText - It returns the response as text.
- responseXML - It returns the response as XML.
- status - It returns the status number of a request.
- statusText - It returns the details of the status.

10) What are the important methods of XMLHttpRequest?

- abort() - It is used to cancel the current request.
- getAllResponseHeaders() - It returns the header details.
- getResponseHeader() - It returns specific header details.
- open() - It is used to open the request.
- send() - It is used to send the request.
- setRequestHeader() - It adds a request header.

11) What are the forms of the open() method of XMLHttpRequest?

- open(method, URL) - It opens the request, specifying the GET or POST method and the URL.
- open(method, URL, async) - Same as above, but specifies whether the request is asynchronous.
- open(method, URL, async, username, password) - Same as above, but also specifies the username and password.

12) What are the forms of the send() method of XMLHttpRequest?

- send() - It sends a GET request.
- send(string) - It sends a POST request.

13) What is the role of the callback function in AJAX?

A callback function is a function passed as a parameter to another function. If we have to perform various AJAX tasks on a website, we can create one function for executing the XMLHttpRequest and a callback function to execute for each AJAX task.

14) What is JSON in AJAX?

15) What are the tools for debugging AJAX applications?

There are several tools for debugging AJAX applications.

- Firebug for Mozilla Firefox
- Fiddler for IE (Internet Explorer)
- MyEclipse AJAX Tools
- Script Debugger

16) What are the types of postback in AJAX?

There are two types of postback in AJAX.

- Synchronous postback - It blocks the client until the operation completes.
- Asynchronous postback - It doesn't block the client.

17) What are the different ready states of a request in AJAX?

There are 5 ready states of a request in AJAX.

- 0 means UNSENT
- 1 means OPENED
- 2 means HEADERS_RECEIVED
- 3 means LOADING
- 4 means DONE

18) What are the common AJAX frameworks?

- Dojo Toolkit
- Google Web Toolkit (GWT)

19) How can you test the AJAX code?

| Synchronous request | Asynchronous request |
| --- | --- |
| It requests the server and waits for the response. | It sends a request to the server and doesn't wait for the response. |
| It consumes more bandwidth as it reloads the page. | It doesn't reload the page, so it consumes less bandwidth. |
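The XMLHttpRequest lifecycle covered in questions 8-17 can be sketched as a small helper. The function name getText and the injectable createXhr factory (handy for exercising the code outside a browser) are illustrative assumptions, not part of the AJAX standard; only open, send, onreadystatechange, readyState, status, and responseText come from the XMLHttpRequest API itself.

```javascript
// Minimal XMLHttpRequest wrapper with a callback, illustrating the
// open -> send -> onreadystatechange flow from the questions above.
function getText(url, onDone, createXhr) {
  var xhr = createXhr ? createXhr() : new XMLHttpRequest();
  xhr.open("GET", url, true); // true => asynchronous request
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) { // 4 => DONE
      if (xhr.status === 200) {
        onDone(null, xhr.responseText);
      } else {
        onDone(new Error("HTTP " + xhr.status));
      }
    }
  };
  xhr.send();
}
```

In a browser you would call getText("/api/data", function (err, text) { ... }) with no third argument, letting it construct a real XMLHttpRequest.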
import pygame

# Sprite sheet cutter-outer
class Cutout(object):
    def __init__(self, src, w, h):
        # Initialize with the source sprite sheet and the sprite dimensions.
        self.w = w
        self.h = h
        self.rsource = pygame.Rect(0, 0, w, h)
        self.srcimg = src
        self.img = pygame.Surface(self.rsource.size).convert()
        self.img.blit(self.srcimg, (0, 0), self.rsource)
        self.cktog = False
        self.ck = (0, 255, 0)

    def set_Sheet(self, src, w, h):
        # Change the sprite sheet.
        self.w = w
        self.h = h
        self.srcimg = src

    def set_Img(self, x, y):
        # Pass in the indices of the desired sprite.
        myx = x * self.w
        myy = y * self.h
        self.rsource = pygame.Rect(myx, myy, self.w, self.h)
        self.img = pygame.Surface(self.rsource.size).convert()
        self.img.blit(self.srcimg, (0, 0), self.rsource)
        self.cktog = False

    def get_Img(self):
        # Apply the colorkey if it is toggled on.
        if self.cktog:
            self.img.set_colorkey(self.ck)
        # Return the selected image.
        return self.img

    def set_colorkey(self, tog=True, col=(0, 255, 0)):
        # Toggle the colorkey on or off and set its color.
        self.cktog = tog
        self.ck = col
Lagged independent variables in economic analysis

I am trying to study the effects of foreign direct investment (FDI) on growth of gross domestic product (GDP). It's considered that FDI positively impacts GDP growth, and it makes sense to assume that FDI in a particular year will cause GDP growth in following years, as the investment produces goods and services, provides jobs, and pays taxes not just in the year of investment but also in the years following. So does it make sense to use a lagged independent FDI variable? Brief instructions on doing the analysis in R would also be much appreciated. TIA

Yes, it makes perfect sense to use lagged variables in econometric models. Practitioners do that all the time. However, you may get more informative results if your data has a faster frequency, like quarterly. With annual data, your lag represents a huge amount of time. Is there realistically a full-year lag in the impact of FDI on GDP? Intuitively this seems really long. With annual data, I think there would be very little value in exploring more than a lag of 1 period, given that the unit of time is so large. Meanwhile, with quarterly data you could readily explore the correlations between GDP and FDI in the current period up to FDI lagged 4 periods, and get some pretty informative correlations. This study of correlations would tell you which lag to choose. You can choose more than one lag as long as the lags are not strongly correlated among themselves (often they are not). One of the most straightforward models to develop in econometrics is a multiple regression model. It is very easy to do in R; the coding is pretty straightforward and would look like this:

regression <- lm(gdp ~ fdil1 + fdil2, econdata)

The above depicts a regression model object with GDP as the dependent variable and FDI lag 1 & lag 2 as the independent variables. You also need to specify the data frame you are using. In this case, I call it econdata.
You can readily extract the main statistical output of that regression by using the very handy summary() function. And, you are done. However, for your model to make good sense, make sure you detrend your variables. An easy way to do that is to transform GDP into the quarterly % change of GDP and do the same for FDI. If you don't do that you will have a "spurious regression", as named by Granger and Newbold in their paper on the subject from 1974. Spurious regressions have an R Square close to 1 and a Durbin-Watson below 1. They do not have any economic meaning; they simply pick up that both the dependent and independent variables keep on growing over time, and that the underlying growth trends are highly correlated. But this is absent of any economic meaning. You could replace your independent variables with a simple trend variable (1, 2, 3, ...) and your model would also have an R Square close to 1. Yes, it makes absolute sense to do that, and that is a standard technique. In R there are a number of packages for doing so. I would suggest the Econometrics Task View as a good place to start. As a first step, you can create multiple lags (say 1, 2, ... 5 years) and then create their correlation matrix to see which one has the best correlation. It's possible to use more than 1 lag as the independent variables, but then you have to worry about correlation between your independent variables (multicollinearity), which undermines the OLS regression estimates. This becomes less of a problem the further separated the lags are. So, using the 1 & 2 year lags would probably be a problem, but 1 & 5 year lags might not be a big problem.
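The answers above use R, but the lag-selection step they describe (correlate GDP growth with FDI at several lags, pick the strongest) can be sketched in plain Python with no libraries. This is only an illustration: the series below are made-up numbers, and `corr`/`lagged_corrs` are hypothetical helper names, not part of any package.

```python
# Sketch of the lag-selection step described above: compute the correlation
# between GDP growth and FDI lagged by k periods, for k = 1..max_lag, and
# pick the lag with the strongest correlation. Illustrative data only.

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def lagged_corrs(gdp, fdi, max_lag):
    """Correlation of gdp[t] with fdi[t - k] for each k = 1..max_lag."""
    return {k: corr(gdp[k:], fdi[:-k]) for k in range(1, max_lag + 1)}

gdp = [1.2, 1.5, 1.1, 1.8, 2.0, 1.7, 2.2, 2.4, 2.1, 2.6]  # % change per quarter
fdi = [0.8, 1.0, 0.7, 1.3, 1.5, 1.2, 1.6, 1.9, 1.5, 2.0]
by_lag = lagged_corrs(gdp, fdi, 4)
best = max(by_lag, key=by_lag.get)  # lag with the strongest correlation
```

As the answer notes, the chosen lags should then go into the regression as separate regressors, keeping an eye on how correlated they are with each other.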
STACK_EXCHANGE
Segmentation fault for test_krr_bob on macOS The test test_krr_bob produces a segmentation fault on macOS (10.14): ================================================================================== test session starts =================================================================================== platform darwin -- Python 3.6.9, pytest-5.1.2, py-1.8.0, pluggy-0.13.0 -- /Users/rmeli/miniconda3/envs/qml/bin/python cachedir: .pytest_cache rootdir: /Users/rmeli/Documents/git/software/qml collected 90 items test/test_acsf.py::test_acsf_1 PASSED [ 1%] test/test_acsf.py::test_acsf_2 PASSED [ 2%] test/test_acsf_linear_angles.py::test_fchl_acsf PASSED [ 3%] test/test_acsf_linear_angles.py::test_acsf PASSED [ 4%] test/test_arad.py::test_arad PASSED [ 5%] test/test_armp.py::test_set_representation PASSED [ 6%] test/test_armp.py::test_set_properties PASSED [ 7%] test/test_armp.py::test_set_descriptor PASSED [ 8%] test/test_armp.py::test_fit_1 PASSED [ 10%] test/test_armp.py::test_fit_2 PASSED [ 11%] test/test_armp.py::test_fit_3 PASSED [ 12%] test/test_armp.py::test_fit_4 PASSED [ 13%] test/test_armp.py::test_score_3 PASSED [ 14%] test/test_armp.py::test_predict_3 PASSED [ 15%] test/test_armp.py::test_predict_fromxyz PASSED [ 16%] test/test_armp.py::test_retraining PASSED [ 17%] test/test_compound.py::test_compound PASSED [ 18%] test/test_distance.py::test_manhattan PASSED [ 20%] test/test_distance.py::test_l2 PASSED [ 21%] test/test_distance.py::test_p PASSED [ 22%] test/test_energy_krr_atomic_cmat.py::test_krr_gaussian_local_cmat PASSED [ 23%] test/test_energy_krr_atomic_cmat.py::test_krr_laplacian_local_cmat PASSED [ 24%] test/test_energy_krr_bob.py::test_krr_bob Fatal Python error: Segmentation fault Thread 0x000070000e7bf000 (most recent call first): File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/threading.py", line 295 in wait File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/queue.py", line 164 in get File 
"/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/tensorflow/python/summary/writer/event_file_writer.py", line 159 in run File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/threading.py", line 916 in _bootstrap_inner File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/threading.py", line 884 in _bootstrap Current thread 0x0000000113a955c0 (most recent call first): File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/qml/representations/representations.py", line 314 in generate_bob File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/qml/utils/compound.py", line 249 in generate_bob File "/Users/rmeli/Documents/git/software/qml/test/test_energy_krr_bob.py", line 74 in test_krr_bob File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/python.py", line 170 in pytest_pyfunc_call File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda> File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/python.py", line 1423 in runtest File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 117 in pytest_runtest_call File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda> File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__ File 
"/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 192 in <lambda> File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 220 in from_call File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 192 in call_runtest_hook File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 167 in call_and_report File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 87 in runtestprotocol File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/runner.py", line 72 in pytest_runtest_protocol File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda> File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/main.py", line 256 in pytest_runtestloop File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda> File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/main.py", line 235 in _main File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/main.py", line 191 in wrap_session File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/main.py", line 228 in pytest_cmdline_main File 
"/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda> File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/rmeli/miniconda3/envs/qml/lib/python3.6/site-packages/_pytest/config/__init__.py", line 78 in main File "/Users/rmeli/miniconda3/envs/qml/bin/pytest", line 10 in <module> [1] 18142 segmentation fault pytest -v The segmentation fault comes from subroutine fgenerate_bob in frepresentations.f90 when called by def generate_bob in representations.py. Installation Installation on develop branch. conda create -n qml python=3.6 ipython conda activate qml pip install numpy scipy pandoc ase pandas scikit-learn tensorflow pytest python setup.py build python setup.py install Thanks for the report, I'll investigate the reason for the segfault! Also, thanks for the PR! Really useful addition for the build matrix! I am trying to find someone with a mac that I can borrow in order to get this fixed, likely won't happen this week, unfortunately.
GITHUB_ARCHIVE
...experience or more, or b) three years experience or more but with an incredibly good work ethic. No particular framework required, as I'm just wanting to be coached on my fundamentals, but I do use Angular and Vue - and I'm potentially wanting to learn React. Note that this is an in-person job, so you'll need to be in Delhi - preferably on the Southside We require 5 videos that explain real world applications of topics that are studied in civil engineering. The tutor will explain in a video of around 8 minutes the fundamentals in civil engineering. Topics will be confirmed after approval of your proposal. These videos will be recorded by using power point and video recording software. This project ...information: Title: After School Cheer Club Text Under Title: Join us for a semester of after school cheer club Net Text: Open to all grades Students will focus on the fundamentals of cheerleading including cheers, chants, jumps, tumbling and a full routine. Students will have two opportunities to perform throughout the year- at The Extreme Cheer We are wanting to engage with freelance Technical trainers for below technologies Core Java :Advanc...with freelance Technical trainers for below technologies Core Java :Advance J2EE :Advanced Java - JMS/JDBC OS :Unix/ Linux and Shell scripting Oracle / PL-SQL - Database fundamentals and Pl*SQL Web Development :Web Services : XML/XSL/XPATH :SOAP/HTTP You will be provided a mobile responsive design. You are to take our sales force API that includes Company Name, Company DBA, Storefront Address, Phone Number, Email, Company URL, and sync it all to our own database, and then populate on the OpenStreetMap. Everytime sales force is updated, we need to auto update the site/app. This needs a database ...short lectures to teach members of our website the fundamentals of drawing. The method will be to show in the videos the instructor drawing directly on the paper while he explains what he's doing. 
Skills needed: -Experience in drawing and creating artworks. -Drawing knowledge in fundamentals such as line, shape, form, space, perspective, value and contrast ...ability to logically and analytically troubleshoot mobile/web applications * Knowledge of general QA procedures and methodologies, as well as software development fundamentals * Basic networking knowledge (IP, NAT, Firewall, Routing) * Demonstrated ability to write clear and reproducible problem reports, and test results * "Essential Duties & Responsibilities • Design, build and maintain efficient, reusable, and reliable code • Contribute to building an agile environment and practice Test Driven Development Daily • Engaged in all aspects of product development and will be working closely with product management, operations, client-engineering and customer success teams Looking for a python expert who can fully customize a python open source application and build more features on top of it. The developer should have basic networking fundamentals covered on python, this is a network play. Bid with confidence, minimum 2 year experience with python is recommended. Cheers! ...DB/NoSql - MongoDB, MySql or any No SQL DB. Job Role/Responsibilities - Strong computer science fundamentals - 1 + years of experience handling large volumes and velocity of data - 2 + years of hands-on experience working with Spark and Hadoop based technologies - 1 + years of experience with data ingestion, processing and extracting value from Tutor is required from civil engineering background to teach fundamentals of building construction material and methods-- from civil engineering background... must have technical expertise thank you ...have good web design and implementation experience, plus strong production skills. Required Skills and Experience: • Must be expert level with major design software (Adobe Creative Suite). 
• A solid portfolio demonstrating a range of work and a sound understanding of design fundamentals; layout, typography and color. • Design projects ma... Vision: We have a vision to build personality through human values and create the most respected education brand in terms of q...the young buds. Mission: Our mission is to cater thousands of young buds to excel themselves in academics and groom to be a professional of next generation by focusing on fundamentals and logic instead of cramming theories. ...professional training company based in the UK and operating globally. We deliver public and in-house courses (tailored-made), one-on-one training, and e-learning. We address the professional and corporate markets: (a) Corporate clients are banks, insurance companies, financial institutions, industrial and service-based groups, as well as governments
OPCFW_CODE
I was able to pass the task, but I still cannot understand why we need to call super(fileName) in the constructor. Reading this link gave me some more understanding, and in this case the FileOutputStream class obviously doesn't have a constructor that matches what we need. But it's still not clear to me. Can someone give me a more digested explanation? TIA Why do we need to call the superclass constructor with the fileName? 9 April 2020, 06:04 Why do we care to pass a filename to the BASE constructor (the FileOutputStream)? We have "copied" it (or overridden, or implemented, whatever). So we're not planning to use FileOutputStream anymore, we wanted more ;) So we created AmigoOutputStream as a "copy" (extended/copied FileOutputStream) to create the same and more functionality. Now we can use the AmigoOutputStream overridden methods instead of the original FileOutputStream, and don't give a hoot any longer about what the base class was doing, or wanted? My understanding surely is still very amateur, please comment? 19 February 2020, 10:24 The keyword super is used to invoke an overridden method (or constructor in this case). Over here, you need to initialise the FileOutputStream field when using the class's constructor. The problem is, all the FileOutputStream constructors either take a File or a String with a file name as an argument; the AmigoOutputStream constructor only takes a FileOutputStream object as an argument. So, how do we initialise the FileOutputStream field? We invoke the FileOutputStream class's constructor, and pass fileName as an argument. 20 February 2020, 12:18 I still don't see the purpose, I guess I'm missing something. In the main method of the AmigoOutputStream class we have: So, we are passing the fileName to the FileOutputStream constructor. All good so far, we create a new FileOutputStream with fileName and pass that object to the AmigoOutputStream constructor.
So then, why do we call super(fileName) if we already have created a new FileOutputStream? Does my question make sense? 20 February 2020, 13:03 From the article I linked: "Note: If a constructor does not explicitly invoke a superclass constructor, the Java compiler automatically inserts a call to the no-argument constructor of the superclass. If the super class does not have a no-argument constructor, you will get a compile-time error." As FileOutputStream doesn't have a no-argument constructor, it would cause an error. 27 March 2020, 11:12 I hope somebody will correct me if I am wrong, but I will try to explain it in my own words. When you extend the FileOutputStream class, the first thing that happens when an object of the inheriting class is initialized is that the BASE class constructor is called. In this case the base-class constructor MUST HAVE a parameter because there is no default constructor without a parameter. Therefore the parameter we are passing to AmigoOutputStream doesn't matter at this stage. 30 August 2020, 08:15 Anthony Chalk's (user 10482029) answer is the solution to this question
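The thread above is about Java, but the same rule applies in Python: a subclass must explicitly forward constructor arguments to its base class, or the base never learns which file to open. A minimal Python sketch of the idea, with `AmigoStream` as a made-up class name mirroring the Java example (not any real library class):

```python
import io
import os
import tempfile

# Python analogue of the Java discussion: AmigoStream extends the base
# file-stream class and must pass the file name up to the base constructor,
# just like Java's super(fileName). Without this explicit call the base
# class is never initialised with a file to write to.
class AmigoStream(io.FileIO):
    def __init__(self, file_name):
        # Equivalent of Java's super(fileName).
        super().__init__(file_name, "w")
        self.bytes_written = 0

    def write(self, data):
        # Overridden method: adds bookkeeping, then delegates to the base.
        n = super().write(data)
        self.bytes_written += n
        return n

path = os.path.join(tempfile.mkdtemp(), "amigo.txt")
with AmigoStream(path) as s:
    s.write(b"secret!")
    written = s.bytes_written
```

The difference from Java is when the mistake surfaces: Java fails at compile time because FileOutputStream has no no-argument constructor, while Python would fail at runtime when the uninitialised stream is first used.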
OPCFW_CODE
Subject: Re: [boost] [review] Multiprecision review scheduled for June 8th - 17th, 2012 From: John Maddock (boost.regex_at_[hidden]) Date: 2012-06-04 04:25:36 >> * I think that the fact that operands of different backends can not be >> mixed on the same operation limits some interesting operations: >> I would expect the result of unary operator-() always signed? Is this >> operation defined for signed backends? > It is, but I'm not sure it's useful. >I can't find it now in the documentation for mp_number, nor in >the code. Could you point me to where it is defined? In code? mp_number_base.cpp#583. In the docs, looks like I missed it :-( Will add (likewise unary +). > It's a mp_uint128_t, and the result is the same as you would get for a > built in 128 bit unsigned type that does 2's complement arithmetic. This > is intentional, as the intended use for fixed precision cpp_int's is as a > replacement for built in types. >I could understand that you want the class cpp_int to behave as the builtin >types do, but I can understand also that others expect that a high level >numeric class shouldn't suffer from the inconveniences the builtin types >suffer and should be closer to the mathematical model. I expected mp_number to >handle these different expectations using a different backend, but >maybe my expectations are wrong. There are a lot of different behaviours possible and only a limited amount of time :-( At this stage cpp_int is intended to be a basic "proof of principle" implementation, useful, but it doesn't provide everything that could be >> It would be great if the tutorial could show that it is possible however >> to add a mp_uint128_t and a mp_int256_t, or isn't it possible? >> I guess this is possible, but a conversion is needed before adding the >> operands. I don't know if this behavior is not hiding some possible > Not currently possible (compiler error). >Why? mp_uint128_t is not convertible to mp_int256_t?
Because we deliberately choose not to provide it. On a technical level the code looks like: template <class Backend> some-return-type operator+(const mp_number<Backend>&, const So the operator overload can not be deduced for differing backends. Interestingly, had these been non-templates then it would have worked as you expected (the conversion would have been found). > I thought about mixed operations early on and decided it was such a can of > worms that I wouldn't go there at this time. Basically there are enough > design issues to argue about already ;-) >As for example? Well, you've raised quite a few ;-) Interface, naming conventions, expression templates, scope.... > However, consider this: in almost any non-trivial scenario I can think of, > if mixed operations are allowed, then expression template enabled > operations will yield a different result to non-expression template > operations. Why? Could you clarify? Consider the Horner polynomial evaluation example: a = (c1 * x + c2) * x + c3; Expression templates transform this into: a = c1 * x; // evaluated in place using "a" as temporary storage a += c2; a *= x; a += c3; Now suppose that the constants cN, x and a all have different precisions. Rounding will change depending on whether you evaluate using temporaries as "(c1 * x + c2) * x + c3" and then assign (and possibly round) to "a", or evaluate in place as above. > In fact it's basically impossible for the user to reason about what > expression templates might do in the face of mixed precision operations, > and when/if promotions might occur. For that reason I'm basically against > them, even if, as you say, it might allow for some optimisations in some >It is not only an optimization matter. When working with fixed precision it's >important to know the precision of the result type of an arithmetic >operation so that it doesn't lose information through overflow or resolution. Right. So make sure you're using the correct type.
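The Horner example above claims that with mixed precisions, rounding after every in-place step (the expression-template rewrite) can give a different answer than computing with full-precision temporaries and rounding once at assignment. This isn't Boost code, but the effect is easy to demonstrate with Python's stdlib decimal module; the coefficient values and the precision of 4 digits are arbitrary choices made to make the rounding difference visible.

```python
from decimal import Decimal, localcontext

# Demonstrates the rounding point made above: evaluating
# (c1*x + c2)*x + c3 with full-precision temporaries and rounding once on
# final assignment differs from rounding after every in-place step, which
# is what a = c1*x; a += c2; a *= x; a += c3 does when "a" is low precision.
c1, c2, c3 = Decimal("1.2345"), Decimal("2.3456"), Decimal("3.4567")
x = Decimal("1.1111")

def with_temporaries(prec):
    # Compute with plenty of precision, round only on final assignment.
    with localcontext() as ctx:
        ctx.prec = 28
        full = (c1 * x + c2) * x + c3
    with localcontext() as ctx:
        ctx.prec = prec
        return +full  # unary + applies the low-precision context rounding

def in_place(prec):
    # Round after every step, as if each op wrote straight into "a".
    with localcontext() as ctx:
        ctx.prec = prec
        a = c1 * x
        a += c2
        a *= x
        a += c3
        return a

lo, hi = in_place(4), with_temporaries(4)  # 7.588 vs 7.587
```

This is exactly why the thread concludes that mixed-precision operands should require an explicit cast: the user, not the expression-template machinery, should decide where the rounding happens.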
Basically we're saying "if you want to do mixed precision arithmetic, then you have to decide (by using casts) which of the precisions is the correct one, and specify that explicitly in the code". > I don't understand, how is that different from the number of decimal >Oh, I got it now. The decimal digits concern the mantissa and not the digits >of the fractional part? Yes, all the digits in the mantissa, both the whole and fractional parts. Remember this is *floating* point, not fixed point. >> * What about adding a Throws specification on the mp_number and backend >> requirements operations documentation? > Well mostly it would be empty ;-) But yes, there are a few situations > where throwing is acceptable, but it's never a requirement. >By empty, do you mean that the operation throws nothing? If yes, this is an >important feature and/or requirement. No, I mean we have nothing to say about it - the front end doesn't *require* that the backend do any particular thing, throw or not throw. It's up to the backend to decide if throwing is or is not appropriate. >> BTW, I see in the reference "Type mp_number is default constructible, and >> both copy constructible and assignable from: ... Any type that the >> Backend is constructible or assignable from. " >> I would expect to have this information in some way on the tutorial. > It should be in the "Constructing and Interconverting Between Number > Types" section of the tutorial, but will check. >I didn't find it there. It's rather brief, but it's the last item: " Other interconversions may be allowed as special cases, whenever the backend allows it: mpf_t m; // Native GMP type. mpf_init_set_ui(m, 0); // set to a value; mpf_float i(m); // copies the value of the native type.
There are more specifics in the tutorial for each backend; for example the mpfr section has: "As well as the usual conversions from arithmetic and string types, instances of mp_number<mpfr_float_backend<N> > are copy constructible and assignable from: The GMP native types mpf_t, mpz_t, mpq_t. The MPFR native type mpfr_t. The mp_number wrappers around those types: mp_number<mpfr_float_backend<M> >, mp_number<mpf_float<M> >, >> If not, what about a mp_number_cast function taking as parameter a >> rounding policy? > I think it would be very hard to define a coherent set of rounding policies that > were applicable to all backends... including third party ones that haven't > been thought of yet. Basically ducking that issue at present :-( >Could we expect this as an improvement for future releases? I hope we can improve cpp_dec_float at some point; a coherent interface for all types I suspect may elude us, as we can't force a particular backend that's mostly third party implemented to follow some model. It would be pretty hard to impose a rounding model on gmp's mpf_t for example - short of reimplementing mpfr - and I can't see us ever doing that! > Yes, but it's irrelevant / an implementation detail. The optional > requirements are there for optimisations, the user shouldn't be able to > detect which ones a backend chooses to support. >Even the conversion constructors? OK, you got me on those, I was thinking of, say, the various eval_add / _multiply / _subtract overloads which are just there to optimise certain >> * Is there a difference between implicit and explicit construction? > Not currently. >So, I guess that only implicit construction is supported. I really think >that mp_number should provide both constructors if the backend provides them. Yes, it's now on the list of things to try and implement - I want to keep the code pretty stable at present though as the review is imminent. > I don't know, I'd have to think about that, what compilers support that >gcc and clang at least. Does msvc 11?
I don't think so. >> * Are implicit conversions possible? > To an mp_number, never from it. >Do you mean that there is no implicit conversion from mp_number to a Correct, that would be unsafe and surprising IMO. >> * Do you plan to add constexpr and noexcept to the interface? After >> thinking a little bit I'm wondering if this is possible when using >> 3pp libraries backends that don't provide them? > I'm also not sure if it's possible, or even what we would gain - I can't > offhand think of any interfaces that could use constexpr for example. >It depends on the backend. But construction from builtins and most of the >arithmetic operations could be constexpr. I'd have to think about that and experiment (it's fixing up the internals to be constexpr safe that could be tricky). The expression template arithmetic ops couldn't ever be constexpr, possibly the non-expression template ones could be though. > Good question, although: > * I think it's pretty common to write "mynumber << 4" and expect it to >Is it so hard to write "mynumber << 4u"? Hard no, surprising that you have to do that yes. I can just hear the support requests coming in now... >> * Why can the "Non-member standard library function support" be used only >> with floating-point Backend types? Why not with fixed-point types? > Because we don't currently have any to test this with. >Well, you can update the documentation to say just that. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
OPCFW_CODE
Are read only partitions safe from corruption if there's also a read/write partition on the same sd card? I want to make my operating system partitions read only, but also to have a separate small partition to occasionally write data. Will the read only OS partitions be safe if the read/write partition gets corrupted from power removal while writing? Or does the entire sd card get corrupted? If the read only partitions are actually safe in this case, then it should be possible to detect on boot when the read/write partition has become corrupted and reformat it. edit: This is for a special application where the RPi will never be "shut down" properly, it will just be powered off. I have experience making bootable read-only systems on the RPi and that works great, but I now need a way to keep the system safe from sd corruption while also being able to write a small amount of data sometimes. This is the answer I received from user farptr on reddit, which is what I suspected was the case: There are two kinds of corruption involved. The first is filesystem-level corruption, where the kernel is still holding a file change in memory and hasn't written it out yet. If you interrupt it by losing power then your storage will be missing or have corrupted data. Your read only partition would be safe from this. Even if it was corrupted, you could repair it by reformatting the SD card. The second is internal metadata inside the SD card itself. The SD card is actually a quantity of flash memory with a tiny controller attached to it, which is basically a customised CPU. The internal controller basically runs its own internal filesystem with metadata that tracks which blocks are bad, how much each block has been written to, etc... Interrupting these write operations will also cause corruption, and if it is bad enough then the internal controller will lock up or act strangely. The problem is that you can't wipe the internal SD card metadata, so the card may be permanently unusable.
Your read only partition won't help with this. What I'm going to do is set up the system as I originally described, with the primary partitions read-only and one additional partition that is writeable, and I will write data to that partition as infrequently as possible -- only when the user makes changes to settings. My hope is that it will be extremely rare for the RPi to lose power at the exact moment that the one data file is being written. Yes, if you have properly flagged the boot partition to be read-only in software then the write-protect switch on the card is irrelevant. As for the detection and reformatting part, that would rarely happen as long as you don't just randomly lose power to the unit. Even then, Raspbian is good about keeping the write caches flushed out to disk. Only if you lose power while writing to a directory would you expect sudden loss of power to do damage to it. The best detection method, then, would be to check for the presence of a file you know is there. If the open fails, then you could trigger a reformat. Or it might hang the machine. In any case, be sure to use Win32DiskImager to make a .img backup so you can always just restore your SD card to a known, good state. Sorry if I was unclear, but I was not asking about the hardware switch on the sd card Your question is futile. You don't even NEED to mount the boot partition; I presume you mean /. Normal Linux writes to the filesystem all the time and won't work if it is not writable. It is possible to put these writes elsewhere (usually tmpfs). You need to distinguish the causes of "corruption". Most often this is just a problem, experienced by all computers, of failing while writing. This is usually fixed by journalling. In rare cases powering off while the SD Card firmware is performing housekeeping can cause "corruption". In fact, instances of "corruption" are rare (and it never seems to happen to the experienced users).
I have had 2 SD Card problems with 5 Pi and 12 SD Cards over 4 years. One was a Card which totally failed after a few days' use, and was replaced under warranty. The other was a failed update. Mind you, I have experienced MANY problems due to operator error! This is probably the cause of most reported problems. The appropriate remedy here is BACKUP. In summary, I suggest you don't bother trying to quarantine your OS. Always power off safely and back up regularly. This is a use case where the RPi will never be shut down correctly. I have experience creating a read-only file system, which works fine, but I now also need a way to write a small amount of data sometimes.
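One answer above suggests detecting a corrupted data partition at boot by probing a file you know should be there, and triggering a reformat if the open fails. A minimal Python sketch of that idea; the sentinel path, device node and mkfs command are hypothetical placeholders, not taken from any real setup.

```python
import subprocess

# Sketch of the boot-time check suggested above: probe a sentinel file on
# the writable data partition; if it can't be opened, assume the partition
# is corrupted and recreate its filesystem. The read-only OS partitions
# are never touched. All paths/devices below are illustrative assumptions.
SENTINEL = "/data/.sentinel"
DATA_DEVICE = "/dev/mmcblk0p3"

def data_partition_ok(sentinel=SENTINEL):
    """Return True if the sentinel file on the data partition is readable."""
    try:
        with open(sentinel, "rb") as f:
            f.read(1)
        return True
    except OSError:
        return False

def repair_data_partition(device=DATA_DEVICE):
    # Destructive: recreate the filesystem on the data partition only.
    subprocess.run(["mkfs.ext4", "-F", device], check=True)

if __name__ == "__main__":
    if not data_partition_ok():
        # A real system would unmount, reformat, remount and recreate the
        # sentinel here; this sketch only reports the decision.
        print("data partition corrupted; would reformat", DATA_DEVICE)
```

Recreating the sentinel file immediately after every reformat (and after every successful settings write) keeps the check meaningful across reboots.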
STACK_EXCHANGE
Application Lifecycle Management is an umbrella term that covers several disciplines which were traditionally treated as separate. On one hand, Application Lifecycle Management covers project management, requirements management and software development; on the other hand, it includes testing, quality assurance and even customer support and IT service delivery. Application Lifecycle Management An Application Lifecycle Management system acts as a tool that provides a standardized environment for communication and collaboration between a company's software development team and its test and operations teams. Under the waterfall approach, software development companies found it hard to meet deadlines and were prone to cost overruns and related issues. After the publication of the Agile Manifesto, software development organizations realized that integrating the two teams, namely development and operations, could make requirements definition far more efficient. Salesforce DX is a well-known tool used for developing a direct relationship with customers. In this process, the combined team plans releases and sprints, tests the product during development and ultimately deploys the latest update seamlessly. Thus Application Lifecycle Management fuses together the disciplines concerned with all aspects of the software development and delivery process. In the following section, I will discuss the characteristics and functions of an ideal Application Lifecycle Management system. Characteristics of an Ideal Application Lifecycle Management - The first characteristic is Requirements Management.
An Application Lifecycle Management tool helps you make sense of your requirements, and ideally it should adapt to your preferred methodology and processes rather than the other way around. - The second is that an Application Lifecycle Management tool may help you estimate and plan your project; which tool you need depends on the level of planning you require. - Thirdly, an Application Lifecycle Management tool may provide you with integrated source code management functionality; ideally the tool offers the flexibility to support different branching and merging models. - An Application Lifecycle Management tool should provide at least test case management, that is, at minimum it helps you create and manage your test cases in folders with sorting and filtering capabilities. - Fifth, most of the tools let you integrate with continuous integration servers. Application Lifecycle Management tools either provide a customer support capability or can at least integrate with other help desks. Other attributes also characterize these tools, such as project and portfolio management, and collaboration and communication. Assurance of Compliance and Governance An ideal Application Lifecycle Management system helps you track any and every change that occurs across the delivery chain; it then helps you implement proper access control without any sacrifice of visibility. Improving Throughput The agile practices these tools support free developers to focus on developing better solutions. Lastly, I would like to conclude by saying that ideal Application Lifecycle Management tools can adopt continuous integration to improve code quality and are also capable of increasing consistency and speed by eliminating the waste caused by manual processes.
Scrolling a Canvas smoothly in Android I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the onDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this. The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say) then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll ... perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to the Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2 and then draw bmpBuffer2 back to the Canvas. A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap and then draw that buffer (with a shift) onto the Canvas, but so far as I can see the various methods, drawLine(), drawShape() and so on, are not available for drawing to a Bitmap ... only to a Canvas. Could I have 2 Canvases? One would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc., and then the buffer bitmap would be drawn onto the other Canvas for display in the View? I should welcome any advice! Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.copy Android's equivalents? I've done a bit more googling this morning. According to this web page http://markmail.org/message/oedvjxi3dhokzq23 I can have a second canvas, so I'll explore that idea. See more about this in a new "answer" below.
I had this problem too. I did the drawing like this:

Canvas bigCanvas = new Canvas();
// Bitmap has no public constructor; create one via createBitmap()
Bitmap bigBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
int scrollPosX, scrollPosY; // (calculate these in the onScroll event handler)

void onCreate() {
    bigCanvas.setBitmap(bigBitmap);
}

protected void onDraw(Canvas targetCanvas) {
    // do drawing stuff, i.e. bigCanvas.draw.... line/bitmap/anything

    // draw to the screen with the scroll offset
    // drawBitmap(Bitmap bitmap, Rect src, Rect dst, Paint paint)
    targetCanvas.drawBitmap(bigBitmap,
        new Rect(scrollPosX, scrollPosY,
                 scrollPosX + screenWidth, scrollPosY + screenHeight),
        new Rect(0, 0, screenWidth, screenHeight),
        null);
}

For smooth scrolling you'd need some sort of method that takes a few points after scrolling (i.e. the first scroll point and the 10th), subtracts them, and then scrolls by that amount in a loop that makes each step gradually slower (ScrollAmount - turns - Friction). I hope this gives some more insight. Thanks, Mervin, for your reply. I'm not sure I understand exactly how your scrolling works. Do you wait until 10 scroll events have been received before you start to move the bitmap? I had imagined that, in order to make the screen display feel fully responsive, I would need to redraw the bitmap onto the Canvas as soon as the first scroll event was received. And, because my "drawing stuff" is quite time-consuming, I decided to move it out of the onDraw method in an attempt to make the program as responsive as possible to the scroll. I seem to have found an answer. I have put the bulk of the drawing code (which was previously in onDraw()) in a new doDrawing() method. This method starts by creating a new bitmap larger than the screen (large enough to hold the complete drawing).
It then creates a second Canvas on which to do the detailed drawing:

BufferBitmap = Bitmap.createBitmap(1000, 1000, Bitmap.Config.ARGB_8888);
Canvas BufferCanvas = new Canvas(BufferBitmap);

The rest of the doDrawing() method is taken up with detailed drawing to BufferCanvas. The entire onDraw() method now reads as follows:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.drawBitmap(BufferBitmap, (float) -posX, (float) -posY, null);
}

The position variables, posX and posY, are initialised at 0 in the application's onCreate() method. The application implements OnGestureListener and uses the distanceX and distanceY arguments returned in the onScroll notification to increment posX and posY. That seems to be about all that's needed to implement smooth scrolling. Or am I overlooking something!? One thing that I have discovered since writing this "answer" is that extra code is needed to "recycle" the buffer bitmap (and any other bitmap objects) when the screen is rotated. This is because when the screen is rotated the app's main activity is ended and restarted, but the memory used by Bitmap objects is not automatically recovered by the system. So, it seems to be important to call the recycle() method on every bitmap when it is no longer needed. In the case of the buffer bitmap, the answer seems to be to override the Activity's onDestroy method and call the bitmap's recycle() in it. This code can be avoided simply by putting android:configChanges="orientation" in the Manifest. See Ribo's answer to this question dated 2011-01-04. please help in this question http://stackoverflow.com/questions/11720702/canvas-zoom-in-partially/11721193#11721193 Sorry, kamal and WildBill, to be slow in responding. I haven't looked at this thread recently. I'm glad you've solved your problem, kamal.
WildBill, I first call doDrawing() from the main Activity's onCreate() after the View has been created; and I call doDrawing() again whenever the image in the buffer bitmap needs to be changed. No need for the activity to be restarted! (Per prepbgg's Jan 27 '10 reply to his Jan 17 '10 'answer') Rather than recycling the bitmap and incurring the overhead of having the activity reloaded, you can avoid having the application reloaded by putting the 'android:configChanges' attribute shown below in the 'activity' element of the AndroidManifest.xml file for the app. This tells the system that the app will handle orientation changes and that it doesn't need to restart the app.

<activity android:name=".ANote"
          android:label="@string/app_name"
          android:configChanges="orientation|screenLayout">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

This method can be used to get a notification when the orientation is changed:

public void onConfigurationChanged(Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    prt("onConfigurationChanged: " + newConfig);
    if (newConfig.orientation == Configuration.ORIENTATION_PORTRAIT) {
        prt(" PORTRAIT");
    } else {
        prt(" LANDSCAPE");
    }
} // end of onConfigurationChanged

This is very interesting. (I did, in fact, at a fairly early stage stop the app from changing orientation by putting android:screenOrientation="portrait" in the Manifest. I did this to stop the memory problems.) However, I would like to handle rotations if possible. When you talk of the app handling orientation changes, what do I need to do to achieve this? Will the View's onDraw method still be called automatically? Or do I simply, perhaps, respond to the rotation by "invalidating" the view? I've just replaced android:screenOrientation="portrait" in the Manifest with android:configChanges="orientation" and the app just works!
(No need to have any code to check for an orientation change ... the screen gets redrawn without any intervention from me.) LogCat suggests that the phone spends about 250 ms in garbage collection each time the orientation is changed. I can't yet see any problems. Thank you very much for coming here with this suggestion. Continuation of reply to Viktor ... In fact, the situation is more complicated. Because the doDrawing process is quite slow (taking 2-3 seconds on my slow old HTC Hero phone) I found it desirable to pop up a Toast message to advise the user that it was happening and to indicate the reason. The obvious way to do this was to create a new method containing just 2 lines:

public void redrawBuffer(String strReason) {
    Toaster.Toast(strReason, "Short");
    doDrawing();
}

and to call this method from other places in my program instead of doDrawing(). However, I found that the Toast either never appeared or flashed up so briefly that it could not be read. My workaround has been to use a time check Handler to force the program to sleep for 200 milliseconds between displaying the Toast and calling doDrawing(). Although this slightly delays the start of a redraw, I feel this is a price worth paying in terms of the program's usability because the user knows what is going on. redrawBuffer() now reads:

public void redrawBuffer(String strReason) {
    Toaster.Toast(strReason, "Short");
    mTimeCheckHandler.sleep(200);
}

and the Handler code (which is nested within my View class) is:

private timeCheckHandler mTimeCheckHandler = new timeCheckHandler();

class timeCheckHandler extends Handler {
    @Override
    public void handleMessage(Message msg) {
        doDrawing();
    }
    public void sleep(long delayMillis) {
        this.removeMessages(0);
        sendMessageDelayed(obtainMessage(0), delayMillis);
    }
}

prepbgg: I don't think the code will work because canvas.drawBitmap does not draw into the bitmap but draws the bitmap onto the canvas. Correct me if I am wrong! Thanks, MasterGaurav.
I agree that canvas.drawBitmap(BufferBitmap, ...) does not draw anything into BufferBitmap. However, I have a separate doDrawing() method where things (lines, bitmaps, etc.) are drawn into BufferCanvas and thereby into BufferBitmap. The code does seem to work (although it does occasionally stutter and/or crash ... I don't know whether that is because my drawing code is faulty or whether there are other errors.)
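The "gradually slower" fling that Mervin sketches above can be expressed independently of any Android class. The helper below is a hypothetical illustration; the names and the constant-friction model are my own, not code from this thread:

```python
def fling_deltas(velocity, friction=0.5, min_step=1.0):
    """Yield successive scroll deltas that shrink by a constant friction
    factor, so each scroll step is smaller than the last until it stops."""
    step = float(velocity)
    while abs(step) >= min_step:
        yield step
        step *= friction

# Each yielded delta would be added to posX/posY before invalidating the view.
```

On each animation tick the next delta is applied to the scroll position, giving a decelerating scroll feel without waiting for a fixed number of scroll events.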
<?php

/**
 * This file is part of the DmishhPagerBundle package.
 *
 * (c) 2013 Dmitriy Scherbina
 *
 * For the full copyright and license information, please view the LICENSE
 * file that was distributed with this source code.
 */

namespace Dmishh\Component\Pager\Tests;

use Dmishh\Component\Pager\Pager;

class PagerTest extends \PHPUnit_Framework_TestCase
{
    public function testDefaults()
    {
        $pager = new Pager(array());
        $this->assertEquals(1, $pager->getPage());
        $this->assertEquals(10, $pager->getItemsPerPage());
    }

    public function testLimitAndOffset()
    {
        $pager = new Pager(array(), 1, 10);
        $this->assertEquals(0, $pager->getOffset());
        $this->assertEquals(10, $pager->getLimit());

        $pager = new Pager(array(), 2, 10);
        $this->assertEquals(10, $pager->getOffset());
        $this->assertEquals(10, $pager->getLimit());

        $pager = new Pager(array(), 3, 50);
        $this->assertEquals(100, $pager->getOffset());
        $this->assertEquals(50, $pager->getLimit());
    }

    public function testCountable()
    {
        $itemsCount = 50;
        $items = Util::generateItems($itemsCount);
        $pager = new Pager($items, 3, 10);
        $this->assertEquals($itemsCount, $pager->getItemsCount());
        $this->assertEquals($itemsCount, count($pager));
    }

    public function testArrayAccess()
    {
        $itemsCount = 15;
        $itemsPerPage = 5;
        $items = Util::generateItems($itemsCount, 'index', 'value');
        $pager = new Pager($items, 1, $itemsPerPage);

        $this->assertTrue(isset($pager['index0']));
        $this->assertEquals($items['index0'], $pager['index0']);

        // Pager has read-only access: assigning is ignored
        $pager['index0'] = '123123';
        $this->assertEquals($items['index0'], $pager['index0']);

        // Pager has read-only access: unsetting is ignored
        unset($pager[0]);
        $this->assertTrue(isset($pager['index0']));
        $this->assertEquals($items['index0'], $pager['index0']);
    }

    public function testIterator()
    {
        $itemsCount = 15;
        $items = Util::generateItems($itemsCount, 'index', 'value');
        $pager = new Pager($items, 2, 5);

        foreach ($pager as $key => $item) {
            $this->assertEquals($items[$key], $item);
        }
    }

    public function testPageOutOfRange()
    {
        $pager = new Pager(array(), 1, 5);
        $this->assertFalse($pager->isPageOutOfRange());
        $pager->setPage(2);
        $this->assertTrue($pager->isPageOutOfRange());

        $itemsCount = 20;
        $items = Util::generateItems($itemsCount, 'index', 'value');
        $pager = new Pager($items, 1, 5);
        $this->assertFalse($pager->isPageOutOfRange());
        $pager->setPage(2);
        $this->assertFalse($pager->isPageOutOfRange());
    }

    public function testHasToPaginate()
    {
        $items = Util::generateItems(5, 'index', 'value');
        $pager = new Pager($items, 10);
        $this->assertFalse($pager->hasToPaginate());

        $items = Util::generateItems(10, 'index', 'value');
        $pager = new Pager($items, 10);
        $this->assertFalse($pager->hasToPaginate());

        $items = Util::generateItems(11, 'index', 'value');
        $pager = new Pager($items, 10);
        $this->assertTrue($pager->hasToPaginate());
    }
}
The system context diagram (also known as a level 0 DFD) is the highest level in a data flow diagram and contains only one process, representing the entire system, which establishes the context and boundaries of the system to be modeled. Thus, it is a high-level view of a system that defines the boundary between the system, or part of a system, and its environment, showing the external entities that interact with it. A context diagram is typically included in a requirements document that is used early in a project to get agreement on the scope under investigation. It must be read by all project stakeholders and thus should be written in plain language, so that the stakeholders can understand it. System context diagrams. The objective of the system context diagram is to focus attention on external factors and events that should be considered in developing a complete set of system requirements and constraints. A context diagram gives an overview, and it is the highest level in a data flow diagram, containing only one process (e.g. an ATM) representing the entire system. It should be split into major processes that give greater detail, and each major process may be split further to give more detail. For example, an ATM is decomposed into several smaller processes in a lower-level DFD. In this way, the Context Diagram or Context-Level DFD is labeled a "Level-0 DFD", the next level of decomposition is labeled a "Level-1 DFD", the next a "Level-2 DFD", and so on. Identify System Scope with a Context Diagram A context diagram is also known as the context data flow diagram or level 0 data flow diagram; it helps you to identify scope and potential stakeholders, and to build a better understanding of the context in which you are working. It shows the interactions between a system and the other actors (external entities) with which the system is designed to interface.
It is typically used early in a project to get agreement on the scope and can be included in a requirements document. A context diagram is designed to be an abstract view, showing the system as a single process with its relationships to external entities. It represents the entire system as a single bubble with input and output data indicated by incoming/outgoing arrows. The system, the top-most process, can be broken down into smaller and simpler parts in a lower-level DFD. Each of these can be broken down further (e.g. Level-1 DFD, Level-2 DFD and so on). Once you've reached the lowest level of decomposed pieces of a subsystem, developers can think about how to start coding those functions. Advantages of DFD - Gives a functional overview and the boundaries of the system. - Communicates the existing system knowledge to the stakeholders with a simple visual representation. - Provides a functional breakdown of the system
What is the difference between Lazarus and CodeTyphon? Firstly, I saw some topics about these two, but they didn't answer my question. I'm looking for a good FPC (Free Pascal Compiler) IDE on GNU/Linux. There are some IDEs like Lazarus and CodeTyphon, and I need a suggestion to choose one of them. I've tried Lazarus once, but all the windows were separate. It looks messy and uninviting. I would like to know what the differences between these two are, and the advantages/disadvantages of each. Thank you. Using glass docking from CT in Lazarus can make Lazarus look the way you want (http://www.pilotlogic.com/sitejoom/index.php/forum/general-discussions/2625-giving-glassdocking-a-second-chance#4574). Using FPCUP can help you install/update/maintain several Lazarus versions (like FPC 2.6.2 + Laz Trunk, or FPC 2.7.1 + Laz 1.0.12, or FPC Trunk + Laz Trunk...). FPCUP can be found here: https://bitbucket.org/reiniero/fpcup CodeTyphon is a distro of Lazarus, like Ubuntu and Debian are distros of Linux. CodeTyphon comes with a large package of components and plugins that you would otherwise have to google, download and install. CodeTyphon has its own idea of which versions of both FPC (the compiler) and Lazarus (the IDE) are stable and which are not yet. Whether their assessment is better or worse than the upstream Lazarus Team's, I don't know. As for the single-window plugin, it is a work in progress and it doesn't seem to me that it is ready for production use, whether you get it as part of CT or download it and add it to vanilla Lazarus. However, maybe it works better on Linux than on Windows; I don't know. There were, however, issues with code legality in the CT grande bundle. It is widely believed that Orca (if I remember the name) violates the copyrights of glScene/vgScene, which also happened in early Delphi FMX releases but was fixed by EMBA later. There were also disputes in the FPC forums/wiki about CodeTyphon pirating some open-source components.
See the answer by Peter Dunne below. +1 From my limited experience, CodeTyphon is what made Lazarus + FPC usable; I couldn't get anything working prior to that. Basically, CodeTyphon only prepares cross-compiling and a bunch of externally sourced components; normal Lazarus Windows installers work out of the box, and have for years. Your question is akin to asking the difference between Linux and Ubuntu. Lazarus is an IDE/component library based on FreePascal (FPC), and CodeTyphon is a distribution of Lazarus and FPC. So CodeTyphon is just one way to install a functioning installation of Lazarus. Lazarus uses the same floating-window design as older versions of Delphi; installing from CodeTyphon won't change that. CT has an experimental plugin to convert the IDE into a single-window design, which, of course, can be installed into vanilla Laz as well. Hardly usable though. @HamedKamrava dunno, it is just already there in CT. Perhaps Anchor Docking? See https://www.google.ru/search?client=opera&q=lazarus+single+ide+window&sourceid=opera&ie=utf-8&oe=utf-8&channel=suggest @arioch I could not understand that part of your answer. I get it now. @HamedKamrava it is actually called pl_GlassDocking; in order to use it you should drag the separate window using the line at the side of the window (its color is light yellow by default) and you will see that you can attach two windows together with it. @HamedKamrava, for combining Lazarus windows, you can install the "KZ Desktop" plugin in Lazarus.
Please look at this: http://www.raphaelz.com.br/ Several friends and I highlighted several licensing issues with CodeTyphon, most of which could have been corrected by sourcing the included files from a known-good source and ensuring the correct license headers were included. PirateLogic refused to correct the issues, which means they are using code in direct violation of the original license terms. The fact that it's open-source code does not change the fact that they are pirating the code by not including the correct license, even after the issue was highlighted. I also found several instances of copyrighted code included which appears to be proprietary and not FOSS at all. They also changed the paths and file names on some libraries so that source is no longer compatible with standard Lazarus/component installs. This, in my view, is totally illogical. These 2 factors heavily undermine what was potentially the best FPC/Lazarus distro. Hardly professional. Lazarus can be a daunting installation process due to its nature as a cross-compiling environment. You don't just download an installer and click OK; a typical "installation" is actually a bootstrap FPC compiler doing a three-pass compilation of an "install". There are plenty of good installation scripts/methods from the official Lazarus/FPC team and in the community for a . But, understandably, the installation process is a skill in itself. CodeTyphon is a different/separate branch of an installer system, which is more of a utility suite/tools/third-party code compilation library. If you want the simplest installation experience, go with CodeTyphon. It has a nice graphical front end for managing the compiler. You can conveniently do the fancy stuff like build "cross-compilers" for almost every "target" operating system out there. It also is jam-packed with hundreds of the best components/libraries pre-installed. It is a very actively maintained project and very professional. A whole lot of work is done for you.
Even if you want to learn the low-level compiler capabilities, CodeTyphon is a good place to start. It is written in FPC/Lazarus and is open source. Simply study it as a "working demo app" along with the other info on the compiler details. If you crash it, at least you don't have to learn to climb the hill. You get to start from the top and lose control on the way down. Start from scratch (and a three-hour reinstallation) Hahaha. Note that it is daunting only for /cross/ purposes; normal Lazarus installers on target (and also win32->win64) are pretty straightforward. Lazarus also has a package "AnchorDock" which allows you to dock all the windows into one. Either install the anchor dock design package after installing Lazarus, or install Lazarus using the script at getlazarus.org, which will do it for you.
Kernel Derivatives There are two components to this enhancement. Optimization Define a theta and eta (inverse theta) function to transform parameters from an open bounded interval to a closed bounded interval (or eliminate the bounds entirely) for use in optimization methods. This is similar to how link functions work in logistic regression: unconstrained optimization is used to set a parameter value in the interval (0,1) using the logit link function. [x] theta - given an interval and a value, applies a transformation that eliminates finite open bounds [x] eta - given an interval and a value, reverses the value back to the original parameter space [x] gettheta - returns the theta-transformed variable when applied to HyperParameters and a vector of theta-transformed variables when used on a Kernel [x] settheta! - this function is used to update HyperParameters or Kernels given a vector of theta-transformed variables [x] checktheta - used to check if the provided vector (or scalar, if working with a HyperParameter) is a valid update [x] upperboundtheta - returns the theta-transformed upper bound. For example, in the case that a parameter is restricted to (0,1], the transformed upper bound will be log(1) [x] lowerboundtheta - returns the theta-transformed lower bound. For example, in the case that a parameter is restricted to (0,1], the transformed lower bound will be -Infinity Derivatives Derivatives will be with respect to theta as described above. [ ] gradeta - derivative of the eta function. Using the chain rule, this is applied to gradkappa to get the derivative with respect to theta. Not exported. [ ] gradkappa - derivative of the scalar part of a Kernel. This must be defined for each kernel. It will be manual, so the derivative will be analytical or a hand-coded numerical derivative. It will only be defined for parameters of the kernel. Not exported. Ex. dkappa(k, Val{:alpha}, z) [ ] gradkernel - derivative of a kernel.
The second argument will be the variable the derivative is with respect to. A value type with the field name as a parameter will be used. Ex. dkernel(k, Val{:alpha}, x, y) [ ] gradkernelmatrix - derivative matrix. Sounds great! How can I help? Can you also explain what the relation is between this enhancement and the derivatives branch? Hello! Very early on there was an attempt at adding derivatives - that's the derivatives branch. However, this added a great deal of complexity. I didn't feel the base Kernel type and calculation method were carefully planned out before building all this complexity on top. For example, there wasn't really any consideration for the parameter constraints and how that would impact the optimization routines (this can be an issue with open intervals such as the alpha parameter in a Gaussian kernel - not all kernels can use an unconstrained optimization method). I've since reworked much of the package and explored how other libraries approach derivatives. Rather than having the Kernel type be a collection of floats, I've now made it a collection of HyperParameter instances. This new HyperParameter type contains a pointer to a value that can be altered, as well as an Interval type that can be used to transform the parameter to a domain more amenable to optimization and to enforce constraints/invariants. I'm almost done with the changes I've outlined in the "Optimization" section. Unfortunately, I need to finish that first, since the derivatives have a few dependencies on those changes. Once that is complete, it will just be a matter of defining analytic derivatives for the parameters and a kernel/kernel matrix derivative. I can provide some more direction as soon as that is done if you'd like to help. It will be a couple more days though. Excellent! I would like to help with defining the analytical derivatives. It seems that some of them have already been done in the derivatives branch. Should #2 be closed?
The optimization section is basically complete save for a few tests - so it's good enough to start on the derivatives. I've updated the original comment for some detail. I've also expanded the documentation here: http://mlkernels.readthedocs.io/en/dev/interface.html The Hyper Parameters section may be helpful. If you'd like to add some derivative definitions and open a PR, feel free. You can probably grab a number of them from the derivatives branch (hopefully some reusable tests, too).
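As a rough illustration of the theta/eta idea (not MLKernels' actual implementation, which is in Julia and may choose different transforms), a log transform removes a single finite bound and a logit transform removes both bounds of a finite interval:

```python
import math

def theta(x, lower, upper):
    """Map x from the open interval (lower, upper) onto the real line."""
    if math.isinf(lower) and math.isinf(upper):
        return x                          # already unbounded
    if math.isinf(upper):
        return math.log(x - lower)        # (lower, Inf): log transform
    if math.isinf(lower):
        return math.log(upper - x)        # (-Inf, upper): mirrored log
    p = (x - lower) / (upper - lower)     # finite interval: logit
    return math.log(p / (1.0 - p))

def eta(t, lower, upper):
    """Inverse of theta: map a real number back into (lower, upper)."""
    if math.isinf(lower) and math.isinf(upper):
        return t
    if math.isinf(upper):
        return lower + math.exp(t)
    if math.isinf(lower):
        return upper - math.exp(t)
    p = 1.0 / (1.0 + math.exp(-t))
    return lower + p * (upper - lower)
```

With such a pair, an unconstrained optimizer can move freely in theta-space while eta keeps the underlying hyperparameter inside its interval, matching the logit-link analogy drawn in the issue.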
2.1.1. BIOS Versus Bootloader When power is first applied to the desktop computer, a software program called the BIOS immediately takes control of the processor. (Historically, BIOS was an acronym meaning Basic Input/Output Software, but the acronym has taken on a meaning of its own because the functions it performs have become much more complex than the original implementations.) The BIOS might actually be stored in Flash memory (described shortly), to facilitate field upgrade of the BIOS program itself. The BIOS is a complex set of system-configuration software routines that have knowledge of the low-level details of the hardware architecture. Most of us are unaware of the extent of the BIOS and its functionality, but it is a critical piece of the desktop computer. The BIOS first gains control of the processor when power is applied. Its primary responsibility is to initialize the hardware, especially the memory subsystem, and load an operating system from the PC's hard drive. In a typical embedded system (assuming that it is not based on an industry-standard x86 PC hardware platform) a bootloader is the software program that performs these same functions. In your own custom embedded system, part of your development plan must include the development of a bootloader specific to your board. Luckily, several good open source bootloaders are available that you can customize for your project. These are introduced in Chapter 7, "Bootloaders."
Some of the more important tasks that your bootloader performs on power-up are as follows: • Initializes critical hardware components, such as the SDRAM controller, I/O controllers, and graphics controllers • Initializes system memory in preparation for passing control to the operating system • Allocates system resources such as memory and interrupt circuits to peripheral controllers, as necessary • Provides a mechanism for locating and loading your operating system image • Loads and passes control to the operating system, passing any required startup information, such as total memory size, clock rates, serial port speeds, and other low-level hardware-specific configuration data This is a very simplified summary of the tasks that a typical embedded-system bootloader performs. The important point to remember is this: If your embedded system will be based on a custom-designed platform, these bootloader functions must be supplied by you, the system designer. If your embedded system is based on a commercial off-the-shelf (COTS) platform such as an ATCA chassis, typically the bootloader (and often the Linux kernel) is included on the board. Chapter 7 discusses bootloaders in detail.
It could well be my fault for being a bit lazy and not really taking the time to understand the way it works. It just seemed easier to use Eclipse and build exactly what I wanted in Ant. To cut a long story even remotely short, I just downloaded the new JDeveloper 10.1.3.1 distribution to verify a bug someone had pointed me at relating to defining MBeans in the orion-application.xml file. As I started to use it, I noticed that it had improved quite a bit in the area of packaging EAR files, certainly in the way I'd wanted to use it in the past. One very nice thing is that it's now possible to specify in an EAR deployment profile where a JAR file should be located within the EAR file. This is particularly helpful with the new JEE5 /lib classloading feature we've added. For example, let's say you are building an application that consists of a Web application and an MBean. Typically you'd build the MBean as one project and the Web module as another project, then assemble them into an EAR file using the EAR file deployment descriptor. You may choose to create a specific project just to contain the EAR/application-level artifacts such as application.xml -- there's a nice JDeveloper feature (that I think is new) here too, where the entries in application.xml are automatically provided based on the J2EE modules selected to be in the application using the Application Assembly dialog page. The example on the left shows an application called "example" that contains three projects: mbean, webapp and application. The mbean and webapp projects contain the code and resources that relate to their needs. They both contain their own deployment profile to package the project into its required archive form. The application project is used to assemble the application from the separate projects using an EAR deployment profile. The EAR deployment profile lets the individual projects within the application be assembled into the EAR file. The nice discovery I found in the 10.1.3.1.
release is that the target directory within the EAR file can be specified now for each project. Why is this exciting. Does Buttso have no life at all? Well what I liked about it, is that you can use this new facility to make use of the JEE5 library-directory classloading addition. By specifying that the mbeans-library jar file should be placed within a /lib directory in the EAR file, the MBeans classes are automatically made avaialble to the other modules within the EAR file. In reality to do this, you must also add an explicit J2EE deployment descriptor application.xml file inside of your EAR file instead of relying on the implicit application.xml file JDeveloper inserts into the EAR file. Add a J2EE deployment descriptor application.xml file to the application project and either set the version to be "5" or if you leave it as a 1.4 version, insert the It's only a little feature for sure, but its certainly something that makes me more productive in JDeveloper than before. I'm hoping to discover a few other little things like this as I start to use it a little more often now.
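As a sketch of what that explicit descriptor might look like (module names follow the example projects above and are illustrative), a JEE5 application.xml using the library-directory element could be:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: module and archive names follow the example above -->
<application xmlns="http://java.sun.com/xml/ns/javaee" version="5">
  <display-name>example</display-name>
  <module>
    <web>
      <web-uri>webapp.war</web-uri>
      <context-root>example</context-root>
    </web>
  </module>
  <!-- JARs under this directory (e.g. mbeans-library.jar) are shared
       across all modules in the EAR -->
  <library-directory>lib</library-directory>
</application>
```

With this in place, any JAR packaged under /lib in the EAR is visible to the webapp module without a manifest Class-Path entry.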
A bearded dragon is an excellent pet for any family. The lovable reptile is adorable and cuddly, makes a wonderful addition to homeschooling, and is fun to watch, as they have interesting and amusing behaviors. But before you bring home this lovable creature, make sure you know how to give it the optimal environment. If a 10-gallon aquarium seems a bit small, don't worry! Thanks to this handy guide, you'll get to learn the proper bearded dragon tank setup. Read on to learn how to keep your dragon healthy and happy.

Tank Enclosure and Substrate

Choose a spacious bearded dragon enclosure. A 40-gallon tank is the minimum size for a young dragon, but adults may require larger enclosures (75 gallons or more). Ensure the enclosure is well-ventilated with a secure lid to prevent escapes. Use a suitable substrate for the enclosure's flooring. Options include reptile carpets, ceramic tiles, or non-toxic, fine-grain sand. Avoid loose substrates like loose sand or wood chips to prevent ingestion, which can lead to impaction.

Lighting and Heating

Bearded dragons require access to UVB lighting and a basking area for thermoregulation. Provide a UVB light source (e.g., a UVB-producing fluorescent tube) and a basking spot with a temperature between 95-110°F (35-43°C). Use a heat lamp or ceramic heat emitter to achieve this temperature. Provide a cooler side of the enclosure for the dragon to regulate its body temperature.

Diet and Water

A proper diet and an ample supply of water are crucial components of a comfortable and healthy living environment for bearded dragons. As omnivores, these reptiles require a balanced diet that includes a mix of insects, greens, and vegetables. It is important to provide a variety of food to ensure proper nutrition and prevent boredom. In terms of water, a shallow dish should be readily available in the tank at all times. Water can also be provided through regular misting of the tank and food items.
It is essential to regularly clean and replace the water to prevent bacteria growth.

Enrichment and Decor

Create a stimulating environment with hiding spots, branches, rocks, and other climbing opportunities. Bearded dragons enjoy basking, so provide a suitable basking perch. Use non-toxic decorations and avoid any sharp or abrasive objects that could harm the dragon. Regularly rearrange or add new items for mental stimulation.

Humidity

Bearded dragons are susceptible to respiratory issues when exposed to consistently high humidity levels. Excessive moisture in the air can lead to respiratory infections, which can be serious and even life-threatening for these reptiles. High humidity can also negatively affect a bearded dragon's digestion. In excessively damp conditions, they may be more prone to gastrointestinal problems, such as diarrhea, which can lead to dehydration and nutritional imbalances. To help protect your pet, a humidity gauge such as an Exo Terra combometer should come in handy to help you monitor humidity levels.

Follow This Basic Bearded Dragon Tank Setup

Creating the perfect bearded dragon tank setup is essential for the health and well-being of your reptile companion. With the help of this comprehensive guide, you now have the knowledge and resources to create a comfortable and safe environment for your bearded dragon. Don't wait any longer; start setting up their tank today and see the positive impact it has on your pet. Remember, a happy bearded dragon equals a happy owner. Start creating the ultimate tank now! For more articles besides learning about tanks for bearded dragons, visit our blog.
Autoplay resumes when user changes tab

Big fan of Embla here 👋 Thanks so much for your fantastic work on this framework thus far, I love working with it!

Bug is related to: embla-carousel-autoplay
Embla Carousel version: 7.1.0

Describe the bug
On the click of a button I am stopping autoplay with emblaApi.plugins().autoplay.stop(). Autoplay stops, but only while the user remains within the current tab. When swapping tab then returning to the one with the previously stopped carousel, autoplay resumes.

CodeSandbox
https://codesandbox.io/s/tyh5r4

Steps to reproduce
1. Change slide by clicking any of the buttons, i.e. next/prev or dots
2. Open a new browser tab or switch to one already open
3. Return to the tab with the carousel; autoplay will restart

Expected behaviour
Autoplay remains stopped when the user changes and then returns to the tab.

Additional context
I don't believe this to be the intended functionality, as when changing slide by clicking/dragging, autoplay stops as expected and does not restart after changing tab.

Hi @matthewdixon, I will try to reproduce this when I get the chance. Thank you for a complete bug report with a CodeSandbox. This is, as you already mentioned, not expected behavior. Version 8.0.0 is just around the corner so I won't be doing any bug fixes for v7 anymore. The bug fix will be released with v8. I hope you don't have any objections migrating to v8. I'll let you know when I've investigated this further. Best, David

Hi @davidjerleke, apologies for the severely delayed response!! Great to hear you've rolled out a fix for v8, I've just had a play and can confirm this now works as expected 🎉 I had a look at the latest RC when you first replied but noticed the carousel wasn't as responsive, so decided to come back to it when v8 had been released. Now it has and I've tested it, I still find it's not as responsive as v7, with what appears to be a slight lag between click/drag interactions and carousel movement.
I love how snappy it was previously so this is a shame 😕 Just wondering if this was an intentional change?

> I still find it's not as responsive as v7 with what appears to be a slight lag between click/drag interactions and carousel movement. I love how snappy it was previously so this is a shame 😕 Just wondering if this was an intentional change?

Hi @matthewdixon, I don't understand what you mean? I don't experience any lag at all. Embla v8 input is always responsive for me. For me, clicks happen instantly and drag interactions start instantly on pointer down, and as soon as I start dragging the carousel reacts to it, even if it was in motion before the pointer down. But maybe I'm misunderstanding what you mean.

Hi @davidjerleke, apologies for another outrageously late reply 😅 No you've understood correctly: it was only minimal but I definitely experienced a delay between interaction and response (I remember it being particularly noticeable on drag). I should have a chance to come back to this next week so I'll see if I can demo it to you somehow, might have to try capturing my screen if you're not seeing it your end, as that suggests hardware might be a factor. I'll report back!
times to complete it. As you know, Android is quite intense and sophisticated, since there is a significant number of concepts in it. I found myself in despair and had a thought like 'I might flunk in my finals'.

For each specific weakness entry, additional data is provided. The primary audience is intended to be software programmers and designers.

Trustworthiness is a major concern of a lot of students who are looking for a writing service. There is too much fraud and poor-quality work in the field. To prevent any issues on your part, we take steps to ensure reliability in a few directions.

This section provides details for each specific CWE entry, as well as hyperlinks to additional information. See the organization of the Top 25 section for an explanation of the various fields.

All Assignment Experts is a leading provider of professional academic help and writing services. We offer help on all subjects and across the academic levels. Our team of skilled specialists and 24x7 customer support provides unmatched service to students.

In particular, see how the cases use string constants. But if you call a method that uses an enum with a String argument, you still must use an explicit as coercion.

In this lesson, we'll take a break from our intensive theoretical review of circuits and will turn to some practical considerations; specifically, some basics of building and testing digital circuits. 30 total points.

ACM provides the computing industry's premier Digital Library and serves its members and the computing profession with leading-edge publications, conferences, and career resources. Manuscripts whose results are successfully replicated receive a special RCR designation upon their publication.
T is an array and A is an array, and the component type of A is assignable to the component type of T.

R Programming Homework Help covers all homework and coursework questions in R Programming. Our tutors are highly effective in teaching the use and application of R Programming techniques and concepts on a robust online platform. Students can learn to get the best advantage out of learning R Programming for solving various managerial problems through several techniques. Our online R Programming homework help is a one-stop solution to get last-minute help with exams, homework, quizzes and tests.

Along with good quality content and timely delivery, we provide several benefits that you might not find with other Java programming help providers.

At compile time, we can't make any promise about the type of a field. Any thread can access any field at any time, and between the moment a field is assigned a variable of some type in a method and the time it is used the line after, another thread may have changed the contents of the field.

Solving the Python assignment or planning a Python project requires application of all these features. All of our Python tutors are well versed with Python features and provide instant Python help for graduate and postgraduate students.
Testing Postgres Constraints with pgTAP

Modern applications need to persist their state in a database. If you haven't jumped on the NoSQL database train, you probably use a relational database such as PostgreSQL. Today we will be taking a look at the importance of database constraints and how to test them using pgTAP. Your database is the source of truth for all data at any point in time, and with constraints you can set up guardrails to protect that data. Constraints help ensure the state of your data never invalidates your business logic. Imagine you're developing booking software for a hotel and a requirement is to not allow a room to be double booked on the same day. You can use constraints to ensure such conflicts will not arise.

Now that you understand why we need constraints, why should we test them? Because that's what developers do! We write unit tests if we want confidence in our code, therefore we should write tests if we want confidence in our schema. As our constraints become more complex, it becomes even more important to write tests for their behavior. This can be done in other ways; however, those processes are more involved and likely to be slower than pgTAP.

In this example we will be writing a constraint to prevent the double booking example discussed above. We will also be using Docker and Docker Compose so you don't have to worry about installing new software (unless you don't have Docker yet, of course). If you don't want to follow along, you can pull this Github repo and just follow the steps in the README.
Create a new project directory:

mkdir test_pgtap_constraints
cd test_pgtap_constraints
touch docker-compose.yml

version: '3'
services:
  db:
    image: postgres:11.4-alpine
    environment:
      - POSTGRES_USER=test_pg_tap
      - POSTGRES_PASSWORD=supersecret
      - PGPASSWORD=supersecret
    volumes:
      - ./pg-data:/var/lib/postgresql/data
  pgtap:
    image: hbpmip/pgtap:1.0.0-2
    environment:
      - DATABASE=awesome_hotel_booking
      - USER=test_pg_tap
      - PASSWORD=supersecret
    depends_on:
      - db
    volumes:
      - ./pgtap:/test

Here we are using two images: the official postgres image for the database server and hbpmip/pgtap for running the tests.

Start the postgres database server:

# test_pgtap_constraints/
docker-compose up -d db

Connect with psql and create the database and table:

# test_pgtap_constraints/
docker-compose run db psql -h db -U test_pg_tap

CREATE DATABASE awesome_hotel_booking;
\c awesome_hotel_booking
CREATE TABLE bookings (
    id bigint NOT NULL,
    room_number bigint NOT NULL,
    date date NOT NULL,
    name character varying NOT NULL
);

Inside the test_pgtap_constraints directory, create and enter a pgtap directory:

# test_pgtap_constraints/
mkdir pgtap
cd pgtap

Create a bookings.sql test file:

BEGIN;
SELECT plan(8);

SELECT has_table('bookings');
SELECT col_not_null('bookings', 'id');
SELECT col_not_null('bookings', 'room_number');
SELECT col_not_null('bookings', 'date');
SELECT col_not_null('bookings', 'name');

PREPARE insert_310_july_4_booking AS
  INSERT INTO bookings (id, room_number, date, name)
  VALUES (1, 310, '2019-07-04', 'Kevin Hart');
SELECT lives_ok(
  'insert_310_july_4_booking',
  'can insert booking with all attributes'
);

PREPARE insert_conflict_booking AS
  INSERT INTO bookings (id, room_number, date, name)
  VALUES (2, 310, '2019-07-04', 'Dave Chappelle');
SELECT throws_ilike(
  'insert_conflict_booking',
  'duplicate key value violates unique constraint%',
  'do not allow two bookings for the same room on the same date'
);

PREPARE insert_814_july_4_booking AS
  INSERT INTO bookings (id, room_number, date, name)
  VALUES (3, 814, '2019-07-04', 'Tina Fey');
SELECT lives_ok(
  'insert_814_july_4_booking',
  'can insert booking in another room on the same date'
);

SELECT * FROM finish();
ROLLBACK;

Let's review what we just wrote. The whole test plan is wrapped in a transaction, so all of the inserts are rolled back after the test plan finishes.

SELECT plan(8) tells pgTAP that we're going to run 8 tests. It's how all pgTAP test files begin.
SELECT has_table('bookings') ensures that our schema has a bookings table.
SELECT col_not_null('bookings', 'id') ensures that our bookings table has an id column that does not allow NULL values.
throws_ilike ensures that an error is thrown when we execute our prepared statement. In this case we want postgres to throw a duplicate key error because of the conflicting booking. This test case should fail since we have not created our constraint yet.
SELECT * FROM finish() tells pgTAP that our tests have completed. This is so it can output more information about failures or alert you of discrepancies between the planned number of tests and the number actually run.

Let's run our test bookings.sql through pgTAP and see the results. Since we have not created our constraint yet, we are expecting the throws_ilike test to fail.

# test_pgtap_constraints/
docker-compose run pgtap

## OUTPUT
Running tests: /test/*.sql
/test/bookings.sql .. 1/8
# Failed test 7: "do not allow two bookings for the same room on the same date"
# no exception thrown
# Looks like you failed 1 test of 8
/test/bookings.sql .. Failed 1/8 subtests

Test Summary Report
-------------------
/test/bookings.sql (Wstat: 0 Tests: 8 Failed: 1)
  Failed test: 7
Files=1, Tests=8, 0 wallclock secs ( 0.02 usr + 0.00 sys = 0.02 CPU)
Result: FAIL

The pgTAP output shows us that our double booking test is not passing. Let's add a unique constraint to get our test to pass.

# test_pgtap_constraints/
docker-compose run db psql -h db -U test_pg_tap -d awesome_hotel_booking

Add the unique index:

CREATE UNIQUE INDEX bookings_room_date_uq ON public.bookings (date, room_number);

Our constraint is actually a unique index that ensures there cannot be two booking records with the same date and room number.
If this happens, postgres will throw the unique violation error. Run the tests again:

# test_pgtap_constraints/
docker-compose run pgtap

## OUTPUT
Running tests: /test/*.sql
/test/bookings.sql .. ok
All tests successful.
Files=1, Tests=8, 0 wallclock secs ( 0.02 usr + 0.00 sys = 0.02 CPU)
Result: PASS

Our tests are passing. Yay!

Stop the postgres database server:

# test_pgtap_constraints/
docker-compose down

Constraints are a useful way to ensure the integrity of our data. Once implemented, we can test and validate the behaviour of those constraints with pgTAP.
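Incidentally, the same rule can be declared as a table constraint instead of a standalone unique index; a sketch (constraint name illustrative), which Postgres backs with an identical unique index under the hood:

```sql
ALTER TABLE bookings
    ADD CONSTRAINT bookings_room_date_key UNIQUE (date, room_number);
```

Either form makes the conflicting insert fail with the same "duplicate key value violates unique constraint" error that the throws_ilike test expects.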
Windows Service Monitor is one of the simplest IPHost Monitors. All it does is check whether a given Windows service is running. An alert is raised if the specified service is not running. Testing Windows service presence manually means running the 'net start' command and parsing the results to detect whether the service in question is in the list; if it is not present, the service is stopped. Checking services might be useful to detect, early, possible system malfunction, out-of-resources conditions and so on. IPHost Network Monitor offers a simple means to run such tests easily.

IPHost Network Monitor – 30 days trial period
IPHost Network Monitor 5.4 build 14538 of April 21, 2023. File size: 111MB

Creating a Windows Service Monitor isn't hard. Specify the short service name (note: it can be seen in the properties of the service we plan to monitor; it's different from the service's display name). You can also specify the required credentials (domain, user and password) if the default values (administrator's credentials on the local computer) will not do. Specify the polling interval and the number of service detection failures to be treated as an alert.

Checking Windows service presence can help to detect problems resulting from important services that have stopped functioning. Any service that runs by default can cause system malfunctions if stopped. E.g., if the DNS Client service is stopped, domain name resolution stops and domain names won't be recognized. The W32Time service, if stopped, leaves the system without time synchronization. This can result in miscellaneous problems if file timestamps are checked. You can create separate monitors for each of the Active Directory domain services on an AD Domain Controller. Thus, monitoring the most important services can alert system administrators in time. Windows Service Monitors can be used as dependency monitors for other monitor types.
E.g., if the MS SQL Server service is stopped, the corresponding database monitor will go down; however, the Windows Service Monitor may execute quicker and prevent more resource-consuming checks from returning the same result.

Description of other features:

|Monitoring Features|Here you can find the list of monitor types supported in IPHost Network Monitor and a brief description of their parameters.|
|Application Templates|Here you can find the list of application templates supported in IPHost Network Monitor and their short description.|
|Network Discovery|Helps you to create a basis of your monitoring configuration and automates the task of detecting network hosts and network services.|
|Alerting Features|Here you can find the list of alert types (ways of reacting to problems that happen during monitoring) available in IPHost Network Monitor, and their brief description.|
|Reporting Features|Here you can find the list of report types available in IPHost Network Monitor with brief descriptions.|
|IPHost Network Monitor Interfaces|Here you can find an overview of IPHost Network Monitor components, Windows and web interfaces.|
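The manual 'net start' check described above boils down to membership testing on the command's output; a minimal sketch (the helper name and sample output below are illustrative, not part of IPHost):

```python
def service_running(net_start_output: str, display_name: str) -> bool:
    """Return True if display_name appears in the output of `net start`,
    which lists the display names of started services, one per line."""
    services = {line.strip().lower() for line in net_start_output.splitlines()}
    return display_name.lower() in services

# Illustrative sample of `net start` output
sample = """These Windows services are started:

   DNS Client
   Windows Time

The command completed successfully."""

print(service_running(sample, "DNS Client"))     # True
print(service_running(sample, "Print Spooler"))  # False
```

A monitor would run this check on every polling interval and raise an alert after the configured number of consecutive failures.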
Creating an SSH key and adding it to our authenticator

https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent

To check if we have SSH authentication already, we run on the terminal:

ssh -v firstname.lastname@example.org

If we have SSH authentication, we will see a message like this:

Hi User Name! You've successfully authenticated, but GitHub does not provide shell access.

If it is not successful, follow the next section.

Generate the key

We will write the following command (from the GitHub docs linked above), replacing the email within quotations with the email we used to sign up on GitHub:

ssh-keygen -t ed25519 -C "your_email@example.com"

From Carrie Kouadio

Links from Dr. Tiffany Vora
Workshop outline: https://drive.google.com/file/d/1gV9vEwR7UgWl0Yxi7NRc4JoSxFiJVuWl/view?usp=sharing
Before You Start Writing: https://drive.google.com/file/d/1JPq35Y3mWeWsT0VM_YZAdJcJ-niPx9vI/view?usp=sharing

Course: Data science research in biology

Course learning outcomes:
Experience the process of scientific discovery, including the iterative nature of biological research. This includes: discover and formulate their own scientific questions with clear, testable hypotheses; draw conclusions from analysis of novel data; apply background knowledge to develop and interpret their research.
Engage in highly collaborative research teams to carry out a scientific research project.
Learn common scientific practices employed by biologists, including common tools, instruments, and software.
Communicate their research and discoveries at both technical and non-technical levels.

These are the CLOs for BIO 197: Culminating Experience.
CLO 1. Providing written content to outline, manuscript, figures, tables, or substantial edits to the manuscript.
Authorship order: for middle authors that contributed equally, assign order by random number generator. Indicate contributor roles to the project using the CRediT Taxonomy (Contributor Roles Taxonomy, https://casrai.org/credit/).
Providing/confirming affiliation prior to manuscript submission within approx. 2 weeks of being notified. Endorsing final content of the manuscript within approx. 2 weeks of being notified.

What is a phylogeny and why is it the hot stuff in Biology. Making a phylogeny. Getting genetic data: by yourself! Or from public databases, GenBank. Getting a starting homology hypothesis: construct your own, or get one from TreeBASE. Getting a published genetic tree: Google Scholar, OpenTree, Physcraper. Review of reproducibility concepts and advancement in biological science.

Homework for the SORTEE GitHub hackathon September meeting: "Last month we discussed each finding and briefly summarizing 2 articles/white papers/books/blog posts on GitHub. If you are able to locate articles and summarize them, please add them to the list that starts on page 2 of this google doc: https://docs.google.com/document/d/1xQ8Ol5_a4di64zip228UtJHzneSSCyEvmQ1mP2JG47E/edit#"

Documentation is a key aspect for replication and reproducibility. Today I'm trying to install Physcraper on a new computer, a 2015 MacBook Air. The goal is to check for installation issues. The computer still has Python 2 as default. How do I make Python 3 the default?

We worked on a bunch of issues from documentation, on a separate branch. At the same time, EJM fixed errors on the main branch, and to merge main into our working branch I did

It's 8pm in Merced, California, beginning of fall, and it's 35 °C. I'm working on job applications and I'm trying to render a document written in R Markdown to Word and PDF documents. The method I always use is suddenly failing while trying to render my Rmd file to PDF. When rendering via RStudio, pressing the render button from the research2pdf.Rmd file, the process just gets stuck, so I switch to a terminal to run R from the command line.
In this paper we present Physcraper, a Python package for updating phylogenetic relationships using existing expert-curated alignments and molecular data from GenBank that has not yet been analyzed in an evolutionary context.
'''
This file is part of the zone_plate_testing repository and an extension of the
recipe to simulate one tilted zone plate found in the notebook
simulate_zp_with_tilt.ipynb into a workflow which performs the aforementioned
task multiple times.
'''
import numpy as np
import os, pickle, time
import matplotlib.pyplot as plt
from skimage import io
from skimage import img_as_float
from multislice import prop, prop_utils
from os.path import dirname as up

'''
make_zp_from_rings : make a zone plate from the rings which were created earlier.
Inputs  : n - number of rings, grid_size
Outputs : a numpy array containing the zone plate
'''
def make_zp_from_rings(n, grid_size):
    zp = np.zeros((grid_size, grid_size))
    for i in range(n):
        if i % 2 == 1:
            locs_ = np.load('ring_locs_' + str(i) + '.npy')
            locs_ = tuple((locs_[0], locs_[1]))
            vals_ = np.load('ring_vals_' + str(i) + '.npy')
            zp[locs_] = vals_
    return zp

'''
tilt : get the focal spot for a given zone plate and tilt angle of the input
wave and save it to a tiff file.
Inputs  : i - tilt angle in degrees, zp - zone plate pattern, thickness (of the
          zone plate), parameters, out_dir - output directory
Outputs : dictionary containing : tilt angle, max_loc - location of the focal
          spot, L2 - side length at output plane
'''
def tilt(i, zp, thickness, parameters, out_dir):
    zp_thickness = thickness
    beta = parameters['beta']
    delta = parameters['delta']
    zp_coords = parameters['zp_coords']
    step_xy = parameters['step_xy']
    energy = parameters['energy(in eV)']
    wavel = parameters['wavelength in m']
    f = parameters['focal_length']
    L = step_xy * np.shape(zp)[0]
    n = np.shape(zp)[0]
    os.chdir(out_dir)
    t1 = time.time()
    print('calculating for tilt angle : ', i)
    theta = (i) * (np.pi / 180)
    slope = np.tan(theta)
    x = np.linspace(zp_coords[0], zp_coords[1], n)
    X, Y = np.meshgrid(x, x)
    z1 = 2 * np.pi * (1 / wavel) * slope * X
    wave_in = np.multiply(np.ones((n, n), dtype='complex64'), np.exp(1j * (z1)))
    print('step_xy :', step_xy, 'wavelength :', wavel, 'zp_thickness :', zp_thickness)
    number_of_steps_zp = prop_utils.number_of_steps(step_xy, wavel, zp_thickness) * 2
    wave_focus, L2 = prop_utils.optic_illumination(
        wave_in, zp, delta, beta, zp_thickness, step_xy, wavel, number_of_steps_zp, 0, f)
    focal_spot, x_, y_, max_val = prop_utils.get_focal_spot(np.abs(wave_focus), grid_size)
    io.imsave('tilt_' + str(i) + '.tiff', img_as_float(focal_spot))
    t2 = time.time()
    print('tilt image number :', i, 'time taken :', (t2 - t1))
    return {'tilt_angle': i, 'max_loc': np.array([x_, y_]), 'L2': L2}

os.chdir(os.getcwd() + str('/rings'))
parameters = pickle.load(open('parameters.pickle', 'rb'))
grid_size = parameters['grid_size']

'''
Creating the set of parameters for the simulation data set.
Making a common output directory for the results.
'''
num_zones = np.array([250])
thickness = np.array([2e-6])
inputs = np.linspace(0, 1, 50)
output_dir = up(os.getcwd()) + str('/output/')
os.mkdir(output_dir)

'''
Nested for loop to run the simulation over the variable number of zones,
thicknesses and tilt angles. The output for each set of parameters is saved
into a different folder.
'''
for var1 in range(len(num_zones)):
    n = num_zones[var1]
    zp = make_zp_from_rings(n, grid_size)
    for var2 in range(len(thickness)):
        print(num_zones[var1], ' zones, ', thickness[var2] * 1e6, ' microns thick')
        max_loc = []
        t = thickness[var2]
        out_dir = up(os.getcwd()) + str('/output/') + str('zones_' + str(n) + '_thickness_' + str(t))
        os.mkdir(out_dir)
        print(n, t, grid_size)
        for angle in inputs:
            max_loc.append(tilt(angle, zp, t, parameters, out_dir))
        np.save('max_loc.npy', max_loc)
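For reference, the phase ramp applied to the input wave above (the z1 array) is the standard tilted plane-wave illumination; with tilt angle theta and wavelength lambda it reads:

$$ \psi_{\mathrm{in}}(x, y) = \exp\!\left( i \, \frac{2\pi}{\lambda} \tan\theta \; x \right) $$

which matches the code's z1 = 2*np.pi*(1/wavel)*np.tan(theta)*X term by term.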
#!/usr/bin/env python
#
# Copyright (c) 2013-2016, ETH Zurich.
# All rights reserved.
#
# This file is distributed under the terms in the attached LICENSE file.
# If you do not find this file, copies can be found by writing to:
# ETH Zurich D-INFK, Universitaetstr. 6, CH-8092 Zurich. Attn: Systems Group.
#

# Import pygraph
from pygraph.classes.digraph import digraph

from minmax_degree import minimal_spanning_tree  # own minimum spanning tree implementation
import algorithms
import overlay


class BadTree(overlay.Overlay):
    """
    Build a bad tree.

    We use this to show that picking the "right" topology matters. The idea
    is to invert the weights and run an MST. The spanning tree will then be
    composed of many expensive links (i.e. cross-NUMA links).
    """

    def __init__(self, mod):
        """
        Initialize the clustering algorithm
        """
        super(BadTree, self).__init__(mod)

    def get_name(self):
        return "badtree"

    def _build_tree(self, g):
        """
        We build a bad tree by inverting all edge weights and running an MST
        algorithm on the resulting graph.
        """
        # Build graph with inverted edges
        g_inv = algorithms.invert_weights(g)

        # Run MST algorithm on the inverted graph
        mst_inv = minimal_spanning_tree(g_inv, self.get_root_node())

        # Build a new graph
        badtree = digraph()

        # Add nodes
        for n in g.nodes():
            badtree.add_node(n)

        # Add edges, copy weights from the non-inverted graph
        for (e, s) in mst_inv.items():
            if s is not None:
                badtree.add_edge((s, e), g.edge_weight((s, e)))

        return badtree
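The invert-then-MST idea can be illustrated with a self-contained toy sketch (plain dicts and a hand-rolled Prim's algorithm, not the pygraph-based code above):

```python
def invert_weights(edges):
    """Return edges with each weight w replaced by (max_w + 1 - w)."""
    max_w = max(w for _, _, w in edges)
    return [(u, v, max_w + 1 - w) for u, v, w in edges]

def prim_mst(nodes, edges, root):
    """Return {child: parent} for an MST grown from root (Prim's algorithm)."""
    weight = {}
    for u, v, w in edges:          # undirected: store both directions
        weight[(u, v)] = w
        weight[(v, u)] = w
    in_tree = {root}
    parent = {root: None}
    while len(in_tree) < len(nodes):
        # Pick the lightest edge leaving the current tree.
        u, v, w = min(((u, v, w) for (u, v), w in weight.items()
                       if u in in_tree and v not in in_tree),
                      key=lambda e: e[2])
        parent[v] = u
        in_tree.add(v)
    return parent

# Square graph: cheap ring edges (weight 1), one expensive diagonal (weight 10),
# standing in for cheap intra-NUMA links and an expensive cross-NUMA link.
nodes = ['a', 'b', 'c', 'd']
edges = [('a', 'b', 1), ('b', 'c', 1), ('c', 'd', 1), ('d', 'a', 1), ('a', 'c', 10)]

good = prim_mst(nodes, edges, 'a')                  # normal MST avoids the diagonal
bad = prim_mst(nodes, invert_weights(edges), 'a')   # "bad tree" prefers it
```

Running the MST on the inverted weights makes the formerly most expensive edge ('a', 'c') the cheapest, so the bad tree routes through it.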
Earlier this evening I got a text message from a colleague saying he was having trouble getting his new corporate laptop working on his home WiFi network. This colleague works in sales at VMware, hence I reasoned it would probably be something basic that would only take us 5 minutes to troubleshoot, so I gave him a call. 😉 I suspect many of you reading this know how these things go; you start off ascertaining what OS you’re dealing with, move on to getting them to click various components of the GUI, before progressing to some command-line stuff. In this case, the WiFi connection was sound; the adapter was receiving a DHCP lease, and we could ping IPs on the Internet. However, we couldn’t ping the router, or get the router to respond to DNS queries. I was moments away from manually configuring a known-good external DNS server when I thought we should venture into a quick look at the routing table. As tough as it might be over the phone, I got my colleague to read out the entries from his “route print” output. This revealed a few entries I didn’t like the sound of – several for 192.168.1.0/24 (the default internal subnet used by his router). So, I got him to read out his full ipconfig output – suspecting something might be up. Sure enough, it turned out VMware Workstation had randomly chosen 192.168.1.0/24 as the subnet to use for the VMnet8 (NAT) network. I assume that when the machine was first built it was on a corporate 10.x.x.x network. So when Workstation randomly chose a couple of /24s from the 192.168 range to use for VMnet1 and VMnet8, 192.168.1.0/24 was available – but choosing this means clashing with perhaps the most commonly used private IP range in the world. For those not familiar with VMware Workstation, it supports up to 8 virtual networks and provides a DHCP service which can run on these networks to allocate IPs to VMs connected to them. At install (or first run) it randomly selects two /24 ranges from 192.168.0.0/16 (e.g. 
192.168.114.0/24) for use on these virtual networks. A quick visit to the Virtual Network Editor (Start / All Programs / VMware / Virtual Network Editor) allowed my colleague to change the subnet used for VMnet8 to another /24 range – and the problem disappeared. Certainly one to chalk up to experience! In my view, Workstation should avoid using this specific subnet out of the box – I’m sure it must catch out a fair few users (who seemingly have a 2 in 254 chance of having either VMnet1 or VMnet8 picking this range). Please let me know (leave a comment) if you’ve also suffered from this issue.
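If you suspect this kind of clash, the overlap check itself is trivial to script; a sketch using Python's ipaddress module (the subnet values are illustrative):

```python
import ipaddress

def overlaps(vmnet_subnet: str, lan_subnet: str) -> bool:
    """True if the two IPv4 subnets share any addresses."""
    return ipaddress.ip_network(vmnet_subnet).overlaps(
        ipaddress.ip_network(lan_subnet))

print(overlaps("192.168.1.0/24", "192.168.1.0/24"))    # True: the clash in the story
print(overlaps("192.168.114.0/24", "192.168.1.0/24"))  # False: a safe random pick
```

Feed it the VMnet1/VMnet8 subnets from the Virtual Network Editor and the subnet from ipconfig to confirm the diagnosis before changing anything.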
Ok, so I mentioned in another question an idea for a character known for "(eating) small animals live and fresh with all the dignity and etiquette of a nobleman eating a sandwich"; would a character need any certain feats, stats, or skills to be able to do that? I know there were feats referenced in that article that would be useful for that, but they were more for being able to eat things regardless of decay (Scavenging Gullet) or just being able to bite into them better (Deformity(teeth)). Are there any other such feats, or do you even need a feat to eat meat raw like that? No, a character does not need a feat to eat meat raw. There's no rule for that in D&D, and if there were, it would be laughable if anyone didn't disregard it. "Good thing I'm third level, I can go to sushi bars now!" You mentioned in the other question that you're not even necessarily playing a human character, but some Unseelie Fey right? Different people and cultures and races eat different things, and even the ridiculously legalistic world of 3.5 hasn't seen fit to mechanize that yet, thank goodness. Various races in the Monster Manuals are described as eating all kinds of things ("prefer human flesh...") but there's nothing in their stat blocks about it. Then, see @Discord's answer for how this should be treated as a "non-RAW" activity. I think a certain amount of gastronomic flexibility is expected of an adventurer. Dragon vol. 328 lists Fussy as a potential character flaw, wherein “you are uncomfortable ingesting anything but a small range of preferred foods and drinks.” (The mechanics don’t back this up all that well, though, since all it is is a chance to become Sickened when you drink a potion and a penalty to saves against ingested poisons.) Furthermore, “getting along in the wild” is a DC 10 Survival check. 
That is, the average person could do it without training; in the modern world that’s not really accurate (most of us city-slickers, at least, would probably eat a poisonous mushroom inside a week), but within Dungeons & Dragons it seems to be. So again, the rules seem to assume that adventurers are going to be flexible about eating what they find out there. There are ways to benefit especially from certain kinds of gruesome meals – particularly cannibalism – but those have more to do with being Evil and demonstrating it through food practices than with strange dietary choices. Most are magical benefits, taking that kind of thing to a ritualistic level. For just eating, it seems to be a thing unmentioned by the rules. Adventurers do seem to be expected to be willing to eat just about anything edible, though most would probably kill the rat first. Something like this seems to be purely a matter of how you describe your character. As far as I know, there are no feats that govern being able to eat raw/live animals. It's really a matter of whether it's flavor for your character (pun intended), or whether you actually want game mechanics for it. Different cultures on Earth eat a lot of different things that would make other cultures sick, and that would definitely be expanded in a fantasy world. For really small animals (mice, insects, shrews) I would rule that a feat or roll wouldn't be required if the character has acclimated to eating these things. I would consider things like: the character's cultural/racial background (everyone of a particular race or city eats live mice), back story (the character lived as a feral child with no access to fire) or past practice. (Maybe the character has spent several months 'training' to do this.) If a character doesn't have practice eating these sorts of things, I'd go the Fear Factor route and have them make a Constitution check or Fortitude save to avoid becoming nauseated/sick.
RAW: First, picking up the live lizard to eat it requires a grapple attempt. Since the lizard has a grapple bonus of -12 due to its Tiny size and Strength score of 3, it's quite difficult for it to escape your grasp. Since humans lack the Swallow Whole ability, you technically need to kill it first. This involves making a bite attack, which a human can perform as an unarmed attack. Unless he is a monk or has the Improved Unarmed Strike feat, he takes a -4 attack penalty in order to deal lethal damage. A lizard only has 2 hit points, so you are likely to succeed. Once the lizard is officially killed by your bite damage, it becomes just meat, and eating food is covered under mundane actions that don't require skill checks. Well, the one rule that refers to this is the Ex ability Swallow Whole. It explicitly allows you to swallow a live creature one size smaller than you, and it details the grapple checks required to do this. Saying (as a house rule) that all creatures with mouths have Swallow Whole for creatures 3 sizes smaller than them doesn't seem too crazy. And it gives the creature a chance to escape within the rules. Keep in mind that if you are of the Dragon type, including half-dragons and dragonkin-type creatures, you have a Draconis Fundamentum, which is an organ that's part of your digestive system and allows you to process pretty much anything. You can find out about this in the Draconomicon. If your DM is lenient enough, you could argue that, due to a distant ancestor being a dragon, you've developed at least a partial Draconis Fundamentum which allows you to process uncooked animals or what have you. Kobolds, from what I've been told, also can process nearly any ingested material and, genetically, they're really far removed from the Dragons they revere.
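The -12 grapple modifier quoted above for the lizard follows from the standard 3.5 formula (base attack bonus + Strength modifier + special size modifier, with Tiny at -8). A quick sanity check, with the size table abbreviated to the entries used here:

```python
# D&D 3.5 grapple modifier: BAB + Strength modifier + special size modifier.
# Size table abbreviated to the entries relevant to the example above.
SIZE_MOD = {"Medium": 0, "Small": -4, "Tiny": -8, "Diminutive": -12}

def ability_modifier(score):
    return (score - 10) // 2  # standard ability-score modifier formula

def grapple_modifier(bab, strength, size):
    return bab + ability_modifier(strength) + SIZE_MOD[size]

# A lizard: BAB +0, Strength 3, Tiny size.
print(grapple_modifier(0, 3, "Tiny"))  # -12
```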
When to change the wiper blades?

My car's windshield wipers have started making noise every time I turn them on. I have tried to clean my windshield with Rain-X. After cleaning, they don't make any sound, but a day or two later they start making noise again. Is this the time I should change my wiper blades? It's a used car, so I have no idea how old they are. How can I verify if I really need new wiper blades?

Perhaps there is wax on the windscreen, which can come from carwash brushes even if you didn't select a "wax" program. That horrible stuff - completely unnecessary with modern paint finishes - makes the wipers graunch and the window dangerously opaque when it is smeared by the wipers in the rain. IMO it can only be removed with T-Cut. Similarly, another ill-advised practice of the motor trade is to spray the inside with furniture polish, which not only makes everything slimy, but makes the doors squeak if it gets on the rubber edge seals. Replace every 6-12 months.

The big question you need to ask yourself when considering the wipers is ... do they work? Are they cleaning the windshield without streaking? Are they removing the water as expected? If not, replace them. If the rubber of the wiper is splitting or tearing off the frame, replace them. IOW, if you're not happy with them, replace them. There is no set "this is the time". As far as noise goes, when you clean your windshield, clean your wipers at the same time. This will help them continue to work. The easiest way I have found to do this is at the gas station: if you clean your windshield with what is provided there, turn it so the washer side (sponge) is up, place it under the wiper, and wipe the entire length. You'll usually see a swipe of black on the squeegee sponge. Once cleaned, you'll find they will provide a much improved swipe. There are other ways to clean them as well, like cutting an apple in half and swiping it along the wiper blade to clean it. Some have even suggested a potato.
It will take the "junk" off of the blade and allow it to work again.

I clean the wiper blades every time I clean the window. And yeah, they're clearing snow and rain without much issue; it's just this terrible noise. :) Is there a way I can fix this?

The potato is best used to wipe the screen - the starch allows the rain to "roll" off the screen as droplets when driving along.

@GaurangShah - Then you aren't completely cleaning them, or you need to replace them.
Dead-lock on 0.3.26 on Windows in OpenBLAS multi-threading nested within OpenMP multi-threading

Snippet that reproduces the issue:

```c
#include <stdio.h>
#include <stdlib.h>
#include "cblas.h"

int main() {
    int n = 14000;
    int incx = 1;
    double *x, ddt;
    x = (double *)malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) {
        x[i] = 1.0;
    }
    #pragma omp parallel
    {
        #pragma omp parallel for
        for (int i = 0; i < 10; i++) {
            ddt = cblas_ddot(n, x, incx, x, incx);
        }
    }
    printf("dot(x, x) = %f", ddt);
    free(x);
}
```

The dead-lock is due to https://github.com/OpenMathLib/OpenBLAS/pull/4359:

- the dead-lock does happen on https://github.com/OpenMathLib/OpenBLAS/commit/e60fb0f39731ae9c21c5fd74d432f03b83d2a7d5 (merge commit for https://github.com/OpenMathLib/OpenBLAS/pull/4359)
- there is no dead-lock in the previous merge commit on the develop branch, https://github.com/OpenMathLib/OpenBLAS/commit/5b09833b1c877ad1d395ee38791521ccd32386be

Context:

- discussion and debugging in https://github.com/scipy/scipy/issues/20294
- originally reported in https://github.com/scikit-learn/scikit-learn/issues/28625

Thanks for all the work on getting the reproducer - holding off on a revert though in case a better fix is found quickly enough

@martin-frbg @lesteve I may have found the issue and am working on a fix. Interestingly, in reviewing the Linux code I found the following code at blas_server.c:896 within exec_blas():

```c
#ifdef __ELF__
    if (omp_in_parallel && (num > 1)) {
        if (omp_in_parallel() > 0) {
            fprintf(stderr,
                    "OpenBLAS Warning : Detect OpenMP Loop and this application may hang. "
                    "Please rebuild the library with USE_OPENMP=1 option.\n");
        }
    }
#endif
```

Which strongly suggests that even the Linux version does not expect or support the scenario this bug represents. That code traces back 14 years to GotoBLAS2. Maybe this is legacy code that is no longer relevant? If it is still relevant, then a separate fix may also be required for the Linux thread server if this is intended to be safely supported.
If there was a problem on Linux with the code pattern used in the issue, I would have expected this warning or a crash to have shown up in testing. I wonder why the warning is not hit as expected; maybe there is something else going on.

Agreed. I am guessing something else is going on. For example, is __ELF__ not defined for the build? Also, I notice the missing parens around omp_in_parallel in the first if statement. That function is an OMP function - if you aren't building/linking with OMP you should have an undefined symbol. So, is that symbol stubbed in as a nullptr for non-OMP builds? I don't have time to investigate the Linux side until I fix the Windows side.

PR #4587 sent.

Looks like we're almost there:

- @mseminatore's PR was merged
- A new openblas-libs build was done: https://github.com/MacPython/openblas-libs/pull/149
- Incorporating that new build for SciPy 1.13.0 is in progress: https://github.com/scipy/scipy/pull/20362

It seems this issue can be closed when it's confirmed that the fix works for the original scikit-learn issue.

There's also the side issue of whether to add the reproducer to the "utest" set, which is why I haven't closed the issue here yet. Plan to sort this tonight or tomorrow with the impending release.

It looks like several of the upstream bugs have now been closed. Is this confirmation that the fix has addressed the issue? @martin-frbg let me know if I can help with the utest.

That's right, we somewhat urgently needed to release a new SciPy to support NumPy 2.0.0, so the closing of the issue basically represents that we pulled in your patch and we're hoping for the best. I'm not sure I've seen anything more robust testing-wise yet, but I imagine we'll get some noise if the problem still persists when 1.13.0 is out soon.

I tested quickly with SciPy 1.13 on Windows, which has the OpenBLAS fix from https://github.com/OpenMathLib/OpenBLAS/pull/4587. I can confirm the original scikit-learn issue does not happen anymore, so thanks everyone for this!
Freight and logistics are the lifeblood of our economy. Axle is a financial services company that keeps the supply chain moving by providing freight intermediaries a software-based solution that automates operational overhead & time-consuming back-office processes, so they can maintain better control of their cash flow. We’re processing millions of dollars in payments each week and growing 30% month over month. To sustain this growth, we’re looking for exceptional talent to join our team.

We see Axle as a constant work-in-progress, and the same is true of our people; for all of us, we believe the best is yet to come. Our core values are tenacity, curiosity, empathy and transparency, which we pursue through thoughtful discussion and knowledge-sharing among a diverse set of peers and colleagues. We want to work in the company of warm, inclusive people who treat their colleagues exceptionally well and have a team-first mentality. The kind of people who are committed to going out of their way to help others in the short term and pushing them to improve over the long term. We are intentionally a fully-remote team and have team members across all time zones. Axle is a consciously diverse workplace - across the organization and within each team. We are committed to equitable hiring, training, and advancement. We encourage candidates from underrepresented groups to apply.

We’re looking for a smart and ambitious Sales Development Representative to create a larger top of funnel for new business opportunities, better qualify leads for Business Development Officers to close, and alleviate the prospecting and the majority of cold calling for BDOs. As an early member of the team, you will:

- Respond, engage and qualify inbound leads and inquiries
- Hold intelligent and engaging conversations over the phone and email
- Generate qualified leads through outbound prospecting to freight brokerages via cold calling, direct email and email sequencing
- Follow up with contacts that opened or clicked on Axle’s marketing emails
- Keep detailed descriptions and documentation of contacts and accounts within the CRM
- Collaborate across the organization (internal sales team, product, and operations) to help close deals and provide insight on feedback from clients
- A/B test different outreach campaigns to use data for sales enablement
- Schedule appointments and demos for the assigned BDO team

- 2+ years of previous SDR experience or outbound sales (cold call) experience

Nice to Have
- Experience in the logistics industry
- Experience working at a fintech company or start-up

- Generous Option Grant
- Unlimited PTO
- Fully Remote
- Quarterly Offsites (when it’s safe to travel again)
- Home Office Build-Out Allowance
- Professional Development Allowance
- Healthcare Reimbursement for Premiums and Medical/Dental/Vision Expenses
- Cell Phone Plan Reimbursement
- Home Internet Reimbursement
- Wellness/Gym Reimbursement
- 401K Program

- TechCrunch: Series A Raise
- FreightWaves Industry Article: Startup aims to democratize freight broker financing
- Crunchbase Funding Announcement: Axle Drives Freight Broker Financing Platform Forward With $27.7M
- Why We Invested: Axle
- Customer Reviews: Reviews.io
- Customer Reviews: Trustpilot.com
- Quickbooks Integration Announcement
Release 7.2 – Navigating the User Interface (Part 2)

In the first part of my blog we looked at assigning privileges to an identity in the IDM system. In this part we will look at how to check the status of this assignment. Once again we choose the identity (search person type) and call the Display Identity task. Once called, you can find the Assigned Privileges and Assigned Roles tabs, where you can check the assignments and their current status. Clicking on the Assigned Privileges tab, I can see all the current assignments with status OK. This means the privileges are assigned successfully. However, searching, I do not see the privilege that I had previously assigned, PRIV:ROLE:EDM:SAP_TREX_ADM. In this case you can click the Advanced link, where additional search possibilities are presented. If the assignment is still pending, then you need to search with the radio button Any selected. The privilege is now displayed, with its current status as pending. This means that the assignment is still being processed in the IDM system. If there is a problem, then the status will change to failed. In the below example the assignment of another ABAP role has failed. Find the cause for this and fix it. Then navigate back to the Assign Privileges, Roles and Groups task and perform an advanced search for privileges that are not yet assigned. Click on the failed link, which in the next popup screen will allow you to press the edit/retry button. The status of the assignment will then change to retry, and the provisioning request is resubmitted. Hopefully this blog has been useful if you are, for example, migrating from release 7.1 and are unfamiliar with some of the new features of release 7.2.

Great tip, this is a massive plus over 7.1.

I am facing a similar problem. I did a role assignment. After the approver approves the assignment, the status is displayed as failed. Can you please suggest where I should check to identify the cause of this failure?
Check the privileges assigned to the role and see if any of these failed. If they did, then check the connector hook task for the privilege and see if there are any errors in the job log (or check the overall job log for errors).

Thanks for the helpful information. We are using IDM 7.2 SP8 on NetWeaver 7.3. However, the UI screens are different for us. I do not have the option to check direct/indirect assignments or to check the status of an assignment and retry. In which version can we find these options and additional features?

Did you deploy the SP8 UI on the NW 7.3 server? You can check this via the version of the IDMIC component, which you can see in the NWA.

Thanks for your reply Chris. We are good now after we got the 7.2 SP8 version on the AS Java stack.
10 July 2008 at 8:17pm
I know this "blank page" issue is already written about here, but no post helps me to solve my problem. I want to run SilverStripe on the live server (www.webdesign-solutions.ch), but I can't get past the blank page. I downloaded SilverStripe from the download site. On my local server it works without problems, so I can't understand why it doesn't work on the live server. The other posts about this "blank page" were not really a help for me. Thanks for your help.

10 July 2008 at 10:20pm
Yep, blank pages are never useful, but error logs are! See if you can get access to your php_error.log file - this should hopefully have the PHP error that's causing it. When I've seen a blank page on the installer, a couple of things to check:
* Running PHP5
* At least 32MB of PHP memory, 64MB+ recommended
But the server error logs should provide a starting point.

11 July 2008 at 8:37pm
Thank you for your answer. I asked my hoster to give me the error_log file. In this file is this:
[Fri Jul 11 10:03:15 2008] [error] [client 220.127.116.11] File does not exist: /*folder*/*folder*/*folder*/*htdocsFolder*/install.php2
[Fri Jul 11 10:03:17 2008] [error] [client 18.104.22.168] File does not exist: /*folder*/*folder*/*folder*/*htdocsFolder*/install.ph
I don't understand why it says "install.php2" or "install.ph"? In the "htdocsFolder" are all the files from SilverStripe. I don't have a file called install.php2, and I don't know why it searches for files called install.php2 or install.ph. So I hope you can really help me, it's an important thing for me.

11 July 2008 at 8:59pm
Strange, I don't have any idea why it's doing install.php2 either... Your host doesn't have multiple versions of PHP, does it? Some hosts have multiple versions of PHP all running in the same environment, so make sure your project is PHP5 - use phpinfo() to see what version is running.

11 July 2008 at 10:04pm
Last edited: 11 July 2008 10:07pm
Thank you for your help. The problem was the PHP version.
I always thought I had PHP 5, but it was PHP 4. Perhaps we could add a check to install.php that verifies the PHP version and shows a message if it's not PHP 5. Thank you very much.

22 July 2008 at 9:44am
Is PHP 5 mandatory?

22 July 2008 at 11:22am
Yes. SilverStripe will not run on PHP4. You must be running PHP5. PHP 5.2.0+ is recommended.
This question is about the Command Line Developer Tools that are usually installed with xcode-select --install and updated via a software update from the Mac App Store (at least until macOS 10.13). I use the developer toolchain on a daily basis and it has always worked and updated without issues. Today I updated my Mac from High Sierra 10.13.6 to Mojave 10.14.1, and I've lost the ability to update the Developer Tools. After the update I executed a terminal command that relies on the developer tools being installed. It was a command to update Homebrew, although I don't think that the specifics matter, as I believe that any task that tried to access the developer tools would have triggered the same error message. The error was:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

The phrasing was new (maybe), but I thought that it was caused by the usual need to re-install the Developer Tools after some macOS updates. Later I tried to dig a bit deeper into the failure, and found out that:

$ ls -l /Library/Developer/CommandLineTools/usr/
drwxr-xr-x  3 root  admin  96  4 Nov 19:32 share
$ which xcrun
$ xcode-select -p

Still, as I normally do, I entered the command to install and update the Developer Tools in the terminal:

This started the usual procedure: modal window to confirm, then request to accept the license agreement, followed by a progress bar. Except that it failed very quickly with an unexpected error:

Can’t install the software because it is not currently available from the Software Update server.

I've tried several times, to no avail. It always gets stuck. Sometimes, however, the failure causes the System Preferences to report a pending update:

If I open that preference panel it starts searching for updates, and it always finds nothing except the first time that it happened.
The first time, it found this:

It literally suggested that I install the updates for macOS 10.11 and 10.13. I have no idea why. I closed the preference without installing, and as I said it hasn't shown them again. There is also no update available in the new Mac App Store. Is this a known issue? Is there any way to resolve the problem? Of course I can download the Dev Tools installer for macOS 10.14 from https://developer.apple.com/download/more/. Before installing them manually, though, I'm wondering if there is anything that's broken with the system.
Overview

The Synchronizer is an important operator in case you are using multiple data sources. Different data sources are not guaranteed to have equally distributed sample periods, and also they may use different burst modes. The Synchronizer takes care that the different sources are synchronous. Also it converts all input buses to the same sample frequency, which is the highest of the input buses.

Operator ports

Input Any1: Any sample type. The connection is not limited to one type of signal.
Input Any2: Any sample type. The connection is not limited to one type of signal.
Input Any3: Any sample type. The connection is not limited to one type of signal.
Input Any4: Any sample type. The connection is not limited to one type of signal.
Input Any5: Any sample type. The connection is not limited to one type of signal.
Output Any1: Any sample type. The connection is not limited to one type of signal.
Output Any2: Any sample type. The connection is not limited to one type of signal.
Output Any3: Any sample type. The connection is not limited to one type of signal.
Output Any4: Any sample type. The connection is not limited to one type of signal.
Output Any5: Any sample type. The connection is not limited to one type of signal.

Properties

Find more information about changing properties here: "Properties Viewer"

Caption
type: Word or phrase
The name of the object in the project. This name must not contain '.', '$' nor '@' characters. For more information about the rules and usage of the Caption property, please refer to "Caption property - background and usage".

Documentation
type: See description
Optional documentation of this object. If this object is an operator, the Documentation text is displayed below the operator symbol.

Details

The Synchronizer has 5 inputs and 5 outputs. These inputs and outputs are mirrored, that is, input S1 connects to output S1, input S2 to output S2, etc. The channel information is copied unchanged to the output(s).
Not all inputs have to be connected for the Synchronizer to work. You can choose to connect as many inputs as you need. Connected inputs are allowed to have different numbers of channels. Constant, control and messaging operators cannot be connected to the inputs. If such operators are connected, the Synchronizer will issue a warning and stop working. Note that constants do not need to be synchronized, because they do not have any special timing. See also "Timing properties of samples in a stream". It is allowed to connect non-uniform signals (signals that do not have a regular sample period interval), but only if at least one timed signal is also connected. The non-uniform signals will then be aligned with the timed signal, and the outputs will show the sample rate of the timed signal. For each timed sample, the last non-uniform sample is put through, so if the timed signal's sample rate is lower than the update rate of the non-uniform signal, you may lose non-uniform samples. All outputs get the same sample rate, which is the highest of all connected inputs. If one or more sources do not output samples, because they have not been started, the Synchronizer does not output samples either. The Synchronizer supports sample bursts of at most 10 seconds. This means that if a data source puts out a burst of samples, this burst should not contain more than 10 seconds' worth of samples. For very low frequency signals (< 1 Hz) at least 10 samples are buffered. The Synchronizer will cause a delay between input and output. The delay time is undefined, because it depends on the burst size of the input sources. For example, if a device outputs data in bursts of 0.5 seconds' worth of samples, then the delay time will be just over 0.5 seconds. There will be no delay difference between the outputs. Feedback from a bus from one of the outputs of the Synchronizer is not allowed, but this may not be made clear by the Designer.
If you do make a feedback loop involving the Synchronizer operator, this may lead to undefined effects or bad performance.

Data sources from hardware devices need to be set to Synchronized mode

If a signal at one of the inputs stems from a hardware device operator, then that device operator must be set to Synchronized mode. All hardware device operators that contain the Synchronized property must have their Synchronized property set to True. Read more about synchronizing different devices here: "Synchronization of multiple data sources". If Synchronized is set to True on hardware device operators, this specifies that the data acquisition is System Synchronized, as opposed to Device Synchronized. To be able to combine signals from different hardware signal sources, all sources are required to be System Synchronized. Signal sources such as the Signal Generator ("Signal Generator") are controlled by the system and are therefore automatically System Synchronized. If a hardware device operator is not set to Synchronized mode, the Synchronizer operator will show a buffer overflow after some undefined period of time. If this happens, the operator gives a warning below the symbol.

Example: Synchronizer Demo
Demonstrates various aspects and behavior of the Synchronizer operator.
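The alignment rule described above (for each timed sample, the last non-uniform sample is put through) is a zero-order hold. A minimal illustrative sketch, not the product's actual implementation:

```python
from bisect import bisect_right

def align_to_timed(timed_ts, nonuniform):
    """Zero-order hold: for each timed timestamp, emit the most recent
    non-uniform sample at or before it (None before the first one arrives).
    `nonuniform` is a list of (timestamp, value) pairs sorted by timestamp."""
    nu_ts = [t for t, _ in nonuniform]
    out = []
    for t in timed_ts:
        i = bisect_right(nu_ts, t)  # count of non-uniform samples with ts <= t
        out.append(nonuniform[i - 1][1] if i else None)
    return out

# Timed signal at 1 Hz; non-uniform updates arrive irregularly.
print(align_to_timed([0.0, 1.0, 2.0, 3.0],
                     [(0.4, "a"), (0.9, "b"), (2.5, "c")]))  # [None, 'b', 'b', 'c']
```

Note that sample "a" never appears in the output: between 0.4 s and 0.9 s the non-uniform signal updates faster than the 1 Hz timed rate, which is exactly the sample loss the text warns about.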
AWS::EC2::SubnetRouteTableAssociation does not appear in the Resource types pane of the CloudFormation Designer

In CloudFormation it's possible to create a resource of type AWS::EC2::SubnetRouteTableAssociation. As an example, here's a snippet where I've done exactly that in YAML.

```yaml
Resources:
  mySubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref paramSubnetId
      RouteTableId: !Ref paramRouteTableId
```

I have to concede that in my CloudFormation journey I "got right into it" without really using the drag-and-drop aspect of the CloudFormation Designer (which the documentation calls the Resource types pane). I now find myself in a position where I'm using the Resource types pane, and find there's no SubnetRouteTableAssociation resource. I have considered a couple of possibilities, but they all seem unlikely:

- It's somewhere else (not under EC2), and despite looking through umpteen times I can't find it.
- AWS overlooked adding it.
- It's been given a different name.
- Some other good reason that I can't comprehend just yet.
- It's expected that another resource type is "overloaded" to stand in for this resource.

So in summary, can anyone shine some light on why the resource AWS::EC2::SubnetRouteTableAssociation is not listed in the Resource types pane of the CloudFormation Designer? Many thanks in advance.

I wanted to provide an update on how I have dealt with this, in case others have the same question in the future. The simple answer appears to be that the SubnetRouteTableAssociation does not exist within the Resource types pane. This does not mean that you can't have one, because you can still create the resource manually within the template (writing the code directly). This behaviour does appear at first to be a little inconsistent. One might argue that an association between two entities is not an actual AWS infrastructure object in itself and therefore is not deserving of a widget in the Designer pane.
However, there is another similar resource type that behaves slightly differently: AWS::EC2::SubnetNetworkAclAssociation. This resource also does not have an entry in the Resource types pane, yet when you create it, it does get represented within the Designer pane - not as a regular-looking resource, but rather as a dependency (arrow). It's very easy to overlook, but if you mouse over the dependency arrow, the resource name appears. So what is the difference? Why does SubnetRouteTableAssociation not get any representation in the Designer pane whilst SubnetNetworkAclAssociation does (even though that representation is merely an arrow)? My conclusion is that it is because one requires a resource dependency (for resource creation purposes) whilst the other does not. I would prefer that every possible CFN resource were available in the Resource types pane. I hope this explanation/conclusion helps.
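For reference, the SubnetNetworkAclAssociation discussed above is written almost identically in a template; a minimal sketch, with parameter names mirroring the hypothetical ones in the earlier snippet:

```yaml
Resources:
  mySubnetNetworkAclAssociation:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId: !Ref paramSubnetId
      NetworkAclId: !Ref paramNetworkAclId
```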
I have also tried sudo rfkill, with the same result. Best Wishes, Stephen Last edited by StephenWin; May 18th, 2013 at 12:41 AM. Your name Your email Website By Artem Nosulchik Artem is systems engineer for more than 7 years and holds broad experience in Linux, Unix, Cisco systems administration. Dell may modify the Software at any time with or without prior notice to you. his comment is here Not the answer you're looking for? Command Line User Environment." Back to top #9 paul88ks paul88ks Members 1,201 posts OFFLINE Gender:Male Location:Dallas,Texas Local time:04:13 AM Posted 23 May 2015 - 10:40 PM I used Zorin 32 What would be the possible issues with an IQ based voting system Why do I have two destructor implementations in my assembly output? Details Dell recommends applying this update during your next scheduled update cycle. Register Username E-mail A password will be e-mailed to you. loop lp [email protected]:~$ rfkill list all [email protected]:~$ sudo apt-get autoremove [sudo] password for stephen: Reading package lists... I started with Mint 7 and am now running Mint 12 on my Dell Inspiron. After this operation, 8,982 kB of additional disk space will be used. Contractor/manufacturer is Dell Products L.P., One Dell Way, Round Rock, Texas, 78682. best to you! share|improve this answer answered Jul 21 '13 at 16:17 chili555 27.9k43559 Thanks everyone for all the help. Dell Inspiron 1501 Ubuntu Wifi Driver Again the screen gave this message: Ubuntu 13.04 . . . . If installation was successful, you will see the DSD icon in the Windows taskbar. Dell Inspiron 1501 Wifi Switch Write to me in PM, we will communicate. Leave a reply Cancel reply Your email address will not be published. I just installed ubuntu 11.10 on a dell inspiron 1501 with a dell 1390 broadcom internal wireless card. 
The use of the program is also subject to the terms of your Service Agreement and Terms and Conditions of Sale (if in the US) or the applicable service agreement and Because this waiver may not be effective in some jurisdictions, this waiver may not apply to you. Dell Inspiron 1501 Ubuntu 14.04 Wireless Drivers There is also at set of native drivers which are available here, but these have note been tested yet. Please keep us posted. Chess, for short people Why shouldn't dogs eat chocolate? In some distributions, such as Ubuntu, you will need to add pci=nomsi to your boot options for the system to boot properly. From the commnad line its /usr/bin/jockey-gtk Hope this helps. Dell Inspiron 1501 Best Linux Distro All the distros correctly recognise my wireless router but I need a wired connection in order to browse the Net. Dell Inspiron 1545 Ubuntu Wireless Driver How can I use powerful NPCs without overshadowing the player characters? The changes are forgotten on reboot. http://jcibook.net/dell-inspiron/dell-1501-wireless-drivers-xp.html We’re sorry, but we are unable to complete your request as this service is temporarily unavailable. Browse other questions tagged wireless upgrade or ask your own question. Thanks. April 19, 201111:51 am by toplinkweb directory Reply Of course I like your web-site, but you need to check the spelling on several of your posts. Dell Inspiron Ubuntu Wifi Not Working On and off Ubuntu said it was connected and the later it wasn't connected to the Internet. If you'd like to become an author just let me know! User contributions on this site are licensed under the Creative Commons Attribution Share Alike 4.0 International License. weblink asdfsdl, 2014/12/06 15:53 nice blog. Preparing to Download... Dell Inspiron 1501 Wireless Driver This Agreement is not for the sale of Software or any other intellectual property. Click OK.5. 
Try going to this ubuntu forum: https://help.ubuntu.com/community/WifiDocs/WirelessTroubleShootingGuide
I had the same problem with a Dell laptop.
Get:1 http://us.archive.ubuntu.com/ubuntu/ raring/multiverse linux-firmware-nonfree all 1.14ubuntu1 [3,943 kB]
Fetched 3,943 kB in 4s (813 kB/s)
Selecting previously unselected package linux-firmware-nonfree. (Reading database ... 157225 files and directories currently installed.)
Unpacking linux-firmware-nonfree
Again, Hadaka, thank you for all of your help.
This issue has been fixed since kernel version 2.6.20. They have a good support site also.
Please reboot if you have not already and run: lsmod.
I like Mint though, so think you are on the right track in starting there!
OPCFW_CODE
Define your motivation, do your research (& eat your vegetables) TL;DR: Domain knowledge is more important than any code you will write. So become fluent(ish) in the problem space by researching it before you waste your time programming. This post is part of a multi-part series on how to build a data product. Learn more about the series and check out its full contents here. So you want to build a data product. Before you even try to touch your favorite text editor (hopefully Spacemacs 👽), you've got some work to do. Back when I worked in a biochemistry lab, whenever an experiment failed, my PI would always point out that spending a day in the library can often save weeks in the lab (and a fat stack of money). This axiom holds true in any field where there is something worth doing. So let's hit the library. But before you start surfing over their free WiFi, take a seat in front of the whiteboard. It's good to have a coherent trail of thought as you venture into building anything sizable (at least try to leave some breadcrumbs so you can pull yourself back into reality if you get pulled too far down the rabbit hole while programming). My rule of thumb: if it's going to take longer than half a day, hit the notebook to plan. Step 1 in any plan of mine: state the objective, and WHY you want to get there. Maybe you're just picking up an Asana task your manager handed you, or maybe you're buckling down for the plunge into the pothole-filled road of entrepreneurship. Either way, understanding the intent behind the task can become the most significant tool in a programmer's belt. If you're not excited enough by the idea that your hacking will physically move the needle of the organization you're working for or creating, at least take solace in the fact that any domain knowledge you pick up might reveal hints for an even more accurate, efficient, & downright elegant solution.
Defining your motivation (the why) When I say define your motivation, I don't mean write down how motivated you are to build this product on a scale of 1-10. You can save that for after work, while you're pondering the future of your career. I mean: why is this a significant problem, why is this a difficult problem, perhaps why has this problem yet to be solved, or why will solving this problem move the needle for the company (or yourself)? Pulling a page from business development lingo, products are generally developed & leveraged to minimize a pain or maximize a gain (often both) of their target consumers/stakeholders. This should come as no surprise, even for developers (at least those who are veterans of the agile process ⛑️). Broadly, for Atomata, the motivation was to create a data product that let any indoor farmer grow like a pro with our hardware. In our case, the pain was that of our clients. Indoor farming is often low-margin & unsustainable economically and ecologically. Even lucrative cash crops like cannabis are feeling the squeeze to become competitive as the industry grows ever more saturated. The gain was to make farms more efficient through the optimization of input costs (labor, nutrients, energy, etc.) and output revenue (crop quantity & quality) by getting data-driven practices involved. For my most recent data feature, the motivation was to abstract sensor data into informative alerts for our clients. The pains here are that sensor logs, numbers, and pretty graphs mean very little to stakeholders in a farming operation, because no one knows how to meaningfully interpret them. Also, sensors can be noisy, and some fluctuations may be visually misleading. The gains are to be able to alert farmers when environmental or growth metrics have changed significantly, and to be able to perform "feature extraction" on sensor streams for future statistical/machine learning analysis.
These are examples, at a higher and a lower level, of some of my own development motivations. However, at this stage your motivations will likely be flawed, and that's okay. We are currently developing a testable hypothesis. We likely will come back and refine it, sure enough. But now it's time to test it with research. Doing your research As the almighty no free lunch theorem illustrates: Any two optimization✝ algorithms are equivalent when their performance is averaged across all possible problems✝✝ ✝Substitute optimization for machine learning if you want to keep our discussion to a subset of optimization algorithms. ✝✝ This even includes the widely forgotten method of uniform stochastic selection, also known as random guessing. Thus it becomes mathematically provable that setting out to build a successful model, service, feature, or product is highly dependent on your a priori understanding of the structure of the problem space. This is where research comes into play. But I don't yet mean diving into the literature of your favorite tech stack, throwing all of your cares to the wind. Or even diving into technical research papers to pick an optimization algorithm. Perhaps unfortunately for the antisocial among us, it's time to get some human contact and survey what people actually need. Was your hypothesis on motivation correct? How about the pains? The gains? First, let's build off of the motivation we've just developed around pains & gains and open the first doors of the discovery period of development. I'll refer to these doors here as "Auditing the pain(s)" & "Creating the gain(s)". Since it's rarely an either/or situation, you should think about both to create the foundation on which you will build any product or feature. I'll give some examples of times I've done each. Auditing the pain At a previous job I was tasked to dive into warehouse data inconsistencies.
So I watched warehouse training videos, took a virtual tour of the warehouse in question, and talked to many people who had been to the location or worked there, in person & over e-mail. From there I was able to map out many possible avenues and build automated models to check several aspects of warehouse performance. Sometimes you need to put on your detective hat & get your hands dirty to learn what needs to be built in the first place. Here, the service I was developing was motivated by solving a direct pain of the company. Investigative & automation-based motivators broadly fall into this category, and you will generally be best off speaking with key stakeholders and sources of knowledge in these processes. This is something to outline as you begin your researching stage. At Atomata, we spent time emailing, cold calling, and social media DM'ing our potential clients, asking them about their frustrations with their current processes & their most significant overhead costs. We also dug into literature on the economics & ecological impact of our industry, as well as published surveys of our target market and their behaviors. This sort of customer contact is fundamental to any sustainable business, and I am by no means the only one preaching its practice. Creating the gain How are you going to improve a current process in place? At Atomata, the main improvement our clients are interested in is gross profit. At most businesses this is the obvious gain (hence the term "the bottom line"). It's best to be specific, so we can do better. Since costs fall into the "pain" category for the sake of this discussion, we dissected the main revenue-generating processes in the farm. These come down to quantity and quality of the crop output. Thus we made sure that if a feature we were building didn't adequately serve a pain, it had to serve a gain. This governing process has served us very well in our product development pathways.
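The no free lunch claim quoted earlier is concrete enough to check exhaustively on a toy search space. Here's a minimal sketch (all names are mine, not from the post): every deterministic sampling order sees the same multiset of value sequences across all possible problems, so any performance measure averaged over those problems comes out equal.

```python
from itertools import product

def trajectories(order, functions):
    """Observed value sequence for a fixed sampling order, over every function."""
    return sorted(tuple(f[x] for x in order) for f in functions)

X = [0, 1, 2]
# All 8 possible "problems": every function from X to {0, 1}.
functions = [dict(zip(X, values)) for values in product([0, 1], repeat=len(X))]

forward = trajectories([0, 1, 2], functions)   # search left to right
backward = trajectories([2, 1, 0], functions)  # search right to left

# Both algorithms see the exact same multiset of value sequences across
# all problems, so ANY performance measure averaged over them is equal.
assert forward == backward

def avg_evals_to_max(order):
    """Average number of evaluations until the maximum value is first seen."""
    total = 0
    for f in functions:
        best = max(f.values())
        seen = [f[x] for x in order]
        total += seen.index(best) + 1
    return total / len(functions)

print(avg_evals_to_max([0, 1, 2]), avg_evals_to_max([2, 1, 0]))  # 1.5 1.5
```

Here the "problems" are all eight functions from a three-point domain into {0, 1}; swapping the search order changes nothing on average, which is exactly why a priori knowledge of the problem's structure is what actually matters.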
At Atomata, our initial assumption and motivation was that if the stakeholders in these farms could at least have access to the data of their operation, they could be more informed and able to derive value immediately from our product. We were wrong in this first assumption, which brings me to the next step in this process… Eating your vegetables (or, refining your motivation) You've come far, but I've got bad news. It's likely your initial assumptions that directed your motivation are now wrong after research, or could at least use a touch of refinement. Perhaps the pain was not as significant as you thought, or maybe it's actually quite multivariate and you need to employ some decisively strategic thinking about where you will initially throw your effort. In our case, our initial assumption was wrong. The solution space was deeper than we thought. This realization led us to envision and ideate the new products we would have to build to minimize our targeted pains and amplify the gains, and to decisively & strategically implement them with our limited resources. The more you refine your motivation, the better the product you will ultimately write. This is the pre-programming, "social" optimization step. Even though we're not using any rigorous mathematics to justify our position (yet), we can still borrow a tool like stochastic gradient descent to motivate our discussion with stakeholders. While you may be zeroing in on what you believe to be the ultimate pain/gain/etc. in your research, remember to take a step back and occasionally ask questions outside your main train of thought. You may be surprised by what you uncover. The steps and general practices I've outlined above will rarely fall to you as a data scientist in industry. However, they will be steps that your project/product manager is employing to better lead and position the team.
If this is your case, refrain from hiding in the tech abstraction and make an effort to reach out to those among your team who are actively doing this outreach. Applied data science, unfortunately, is not a pure science. It is intermixed with philosophy, psychology, economics, and sociology. It's okay if you don't have a mastery of the social sciences, but you have no excuse to hold yourself back from taking the first steps toward learning, and ultimately mastering, your problem space. I'd love to hear your process for planning a product or feature before you touch code in the comments section below! Until next time,
OPCFW_CODE
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues. |Product:||platform||Reporter:||Rich Unger <richunger>| |Component:||Window System||Assignee:||issues@platform <issues>| |Issue Type:||ENHANCEMENT||Exception Reporter:| Description Rich Unger 2005-01-11 18:05:36 UTC Creating a singleton TopComponent is a pretty common thing to want to do (explorer window, form palette, debugger stuff, etc). The code necessary to ensure that the singleton whose id is declared in Windows2/Components is the same as any instance that might be returned from in-code invocation is rather complicated and hard for a newbie to grok. It requires understanding a lot about how the window system works. I'd like a subclass of TopComponent that overrides preferredID() and instead declares abstract String getID(), and which will guarantee that only one instance of that class can exist (and has that ID). Comment 1 David Simonek 2005-05-09 12:42:48 UTC Comment 2 Jesse Glick 2005-11-04 18:23:16 UTC I think Milos was wanting something like this too - the current template for a TopComponent involves too much weird and fragile code. Comment 3 _ tboudreau 2006-05-10 09:54:25 UTC +1 > I'd like a subclass of TopComponent that overrides > preferredID() and instead declares abstract String > getID() Almost guaranteed that someday, somewhere, someone will decide not to always return the same value from getID(); some way of making it immutable would be nice. May not be worth worrying about though. I'm not sure that really solves the problem, though - there's a bit of a chicken and egg thing here: You need to have the ID to try to look up the existing singleton component (if any), but if getID() is an instance method, you need to create one to find out the ID. Or did I not understand the suggestion?
It would definitely be nice to see less generated boilerplate code in new TC's from a template, however we do it. A little reflection could do the trick - have a SingletonTopComponent subclass which tries getClass().getDeclaredField("PREFERRED_ID").get(null) in its preferredID() method. The findInstance() method in the template would not be significantly more clunky if moved to WindowManager.findSingleton(Class singletonTCSubclass) (if the PREFERRED_ID field is required, no ID argument is needed to this method - it can look it up). The getDefault() method in the template is simply calling a default constructor - and in fact, it's a de facto memory leak - we could and probably should allow "singleton" TopComponents to be gc'd: Move ResolvableHelper into the SingletonTopComponent class, and have it take a class object. When a TC becomes only weakly reachable, see if writeReplace() returns an instance of SingletonTopComponent.ResolvableHelper. If it does, then we know we can dispose the instance; if not, we can hold the return value of writeReplace() and still dispose the TC. That should have at least some value for memory usage, though we should measure how often people have closed singleton TC's lying around and how many of them there typically are. Comment 4 Jesse Glick 2008-04-08 17:59:24 UTC Given that there is an API template for this, can this be closed? Comment 5 David Simonek 2008-10-20 14:51:51 UTC Still I think it would be usable - there is not enough API differentiation between TCs that should act as singletons (view type of window typically) and others (document type of window typically). I remember some VOC about this. richunger, what is your opinion on current state? Do we still need such an API for Netbeans 7.0 or something similar in this area? Thank you. Comment 6 Rich Unger 2008-10-21 03:47:48 UTC I think some differentiation between the two main TC use cases would still be helpful. Relying on template-generated code is pretty weak.
Is there a way to utilize Bloch's advice for using enums for serializable singletons in this case? Say, using an enum for a map of singleton TCs, and having the winsys api go there to load and retrieve them? I'm just thinking off the cuff here, so feel free to disregard...
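Comment 6's enum idea can be sketched quickly. The names below are hypothetical, not real NetBeans API; the point is only that enum constants are serializable singletons for free (Effective Java's enum-singleton advice), so deserialization can never mint a second instance, and the ID is known without constructing the component first:

```java
// Hypothetical sketch of "an enum for a map of singleton TCs".
// Each constant carries its preferred ID as data, so the window system
// could look the ID up without instantiating the component (avoiding the
// chicken-and-egg problem tboudreau mentions in comment 3).
enum SingletonComponents {
    EXPLORER("explorer"),
    PALETTE("palette");

    private final String preferredId;  // stands in for TopComponent.preferredID()
    private Object component;          // lazily created TopComponent stand-in

    SingletonComponents(String preferredId) {
        this.preferredId = preferredId;
    }

    String preferredId() {
        return preferredId;
    }

    // The winsys would come here to load/retrieve the singleton instead of
    // relying on template-generated getDefault()/findInstance() boilerplate.
    synchronized Object instance() {
        if (component == null) {
            component = new Object();  // would construct the real TopComponent
        }
        return component;
    }
}
```

Serialization of an enum constant writes only its name, and readResolve is handled by the VM, so the singleton guarantee survives the writeReplace()/ResolvableHelper machinery discussed above.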
OPCFW_CODE
Installing the VI Power Documenter
5 December 2008 · Filed in Tutorial
The VMware Infrastructure Power Documenter (hereafter referred to as VIPD) is a nifty PowerShell script that queries VirtualCenter and produces reports of VM configurations, data center inventory, VM stats, etc. Last time I tried to install VIPD, though, I found the installation instructions seriously lacking. Today I had the opportunity to work with a customer to get this installed, and I wanted to share my information here. These instructions are not to be construed as the "official" way of making VIPD work, just what I had to go through in order for it to work. First, you'll need to download the prerequisites: Microsoft .NET Framework 3.5, OpenXML Formats SDK, PowerShell 1.0, and the VI Toolkit for PowerShell. As far as I can tell, you can install these in any order you want, other than being sure to install PowerShell before trying to install the VI Toolkit for PowerShell. Second, copy the VIPD files—specifically, the .PS1 file which is the script itself, the OpenXML PowerTools files, and the DOCX/XLSX formatting templates—to a directory on the hard drive. I placed them in the default PowerShell installation directory, but I suppose they could be just about anywhere. The key thing I've found is that the script and the formatting templates need to be in the same directory. Next, you need to add the OpenXML PowerTools—included with the VIPD download—so that PowerShell can use them. This is where it starts to get dicey. The instructions call for the use of a tool called InstallUtil, but the instructions don't provide any information on where this tool is. I found a version that works in C:\Windows\Microsoft.NET\Framework\v2.0.50727.
From a command prompt (not PowerShell, as the instructions say), change into that directory and run this command: installutil "<full path to OpenXML PowerTools>\OpenXml.PowerTools.dll" The instructions said to add another \OpenXml.PowerTools to the end of that command, but I couldn't make it work that way. Finally, you'll need to ensure that PowerShell can run scripts (I believe the command is Set-ExecutionPolicy RemoteSigned). Once these steps are done, you should be good to go. After using VIPD for a short while, I've also noticed that you can't specify a full path for the output file; you can only specify a filename. This means the output file will be created in whatever directory you're in when you run the script. Anyone who has any additional information or clarification, please feel free to speak up in the comments.Tags: VMware · Virtualization Previous Post: VMware SRM 1.0 Update 1 Released Next Post: Mobile Version Launched!
OPCFW_CODE
An avalanche of analysis, impassioned commentary, and angry rants has descended upon the tech mediasphere over the past two weeks, ever since One Laptop Per Child chairman Nicholas Negroponte urged developers for the XO laptop (formerly the '$100 laptop') to recreate the student computer's user interface for Windows XP rather than Linux. That decision led to the defection of Walter Bender, who had been OLPC's president of software and content and a longtime colleague of Negroponte. It also led free software guru Richard Stallman, who ironically switched to an XO laptop himself just before the announcement, to ask out loud, "Can we rescue OLPC from Windows?" Like Stallman, many other free software advocates argue that one of the principal goals of the One Laptop Per Child project was to free students from proprietary software, which they are not able to alter to suit their own needs. What has been missing from all sides of the debate so far is that, no matter whether the XO user interface runs on Windows XP or Linux, it is still currently missing many applications to help students learn and participate in the classroom. In hopes of getting local Uruguayan programmers to develop educational applications for the XO laptop, Rising Voices grantee Pablo Flores of the Ceibal Project is organizing a programming "jam" this weekend in order to introduce local programmers to one another and get them thinking about developing innovative applications that particularly suit the needs of the hundreds of thousands of Uruguayan students who now carry their bright green laptops to school each day. What follows is a translation of Flores' original announcement in Spanish. The time has arrived to make some new applications for XO laptops. Uruguay is in a privileged position, since our high density of XO laptops gives us a large user base who can use our software.
To put it another way, Uruguayan programmers have the double benefit of being able both to provide practical solutions that meet the educational (and other) needs of our country and, at the same time, to distribute their applications to the entire world. To facilitate the exchange, with the support of LATU and the Faculty of Engineering, a gathering called Ceibal Jam! is being organized this weekend for developers interested in programming applications for the XO. Information is constantly being updated on the wiki, where you can register and participate. The purpose of the meeting is to make initial contact between those interested in developing on the XO platform and to start working on some interesting applications. To do so, we are going to host some introductory talks and workshops, and then organize into small teams focused on specific development goals. In particular, there is an initiative to develop a system to facilitate the creation of blogs from the laptop, which would increase the number of blogs authored in schools throughout the country. "Jam!" meetings take place throughout much of the first world and consist of gathering people with common interests to work intensively to create something together. The origin comes from jazz groups, which conducted improvisational "jam sessions", usually after a concert. It is a new model in our country, but we hope this is not the last meeting of this kind. Be sure to attend!
OPCFW_CODE
Hardware: Linux rk64 4.4.77-rockchip-ayufan-136 #1 SMP Thu Oct 12 09:14:48 UTC 2017 aarch64 GNU/Linux, 4 GB RAM
OS: see above
Java Runtime Environment: java version "1.8.0_151"; Java(TM) SE Runtime Environment (build 1.8.0_151-b12); Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
openHAB version: 2.2.0-1
Issue of the topic: I've installed OH2 on a Rock64… but it cannot detect the Z-Wave stick.
Bus 003 Device 002: ID 0658:0200 Sigma Designs, Inc.
cponte124@rk64:~$ dmesg|grep acm
[ 6.381166] cdc_acm 3-1:1.0: ttyACM0: USB ACM device
[ 6.387656] usbcore: registered new interface driver cdc_acm
[ 6.391481] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
cponte124@rk64:~$ ls -lisa /dev/ttyACM0
13875 0 crw-rw-rw- 1 root dialout 166, 0 Jan 5 11:50 /dev/ttyACM0
cponte124@rk64:~$ cat /etc/default/openhab2
cponte124@rk64:~$ cat /etc/group|grep openhab
I've tested that OZWCP works OK with my stick on this machine, but OH2 keeps saying the port "does not exist":
2018-01-05 11:34:09.260 [hingStatusInfoChangedEvent] - 'zwave:serial_zstick:casaponte124' changed from OFFLINE (BRIDGE_OFFLINE): Controller is offline to OFFLINE (COMMUNICATION_ERROR): Serial Error: Port /dev/ttyACM0 does not exist
2018-01-05 11:35:52.951 [hingStatusInfoChangedEvent] - 'zwave:serial_zstick:casaponte124' changed from OFFLINE (COMMUNICATION_ERROR): Serial Error: Port /dev/ttyACM0 does not exist to UNINITIALIZED
2018-01-05 11:35:52.958 [hingStatusInfoChangedEvent] - 'zwave:serial_zstick:casaponte124' changed from UNINITIALIZED to UNINITIALIZED (HANDLER_MISSING_ERROR)
I do agree that you can rule out a hardware Z-Wave problem, and I do not think that you will get a whole lot more information setting the debug level to TRACE in Karaf either. I seem to remember that there have been some compatibility issues with nrjavaserial on 64-bit ARM Java; maybe it's the same problem for your platform… Has this ever worked for you, and would it be worth a try to switch to 32-bit Java?
The post below is a bit "old", but it might still be the case. I too am having this same problem. I have checked permissions for the ports, ensured openhab is in the tty and dialout groups, and checked the /etc/default/openhab2 Java opts. Everything is according to the advice of the experts. OH 2.2 simply doesn't work on my RPi3. I'll be keeping an eye out for OH 2.2.1; hopefully it will solve this problem. Until then I will stick with v2.1. I'm not sure it's the same problem, because with an RPi3 you'll have a 32-bit OS… is that correct? In that case I think it could be related to the CPU architecture. Have you tried to do a clean installation of 2.2? (I've had problems with package updates on my RPi3 from 2.1 to 2.2… and I reverted to 2.1 on the RPi3… then I installed the Rock64 from scratch with 2.2 and it worked OK… almost everything but this problem with /dev/ACM0.) I have spent the past few days doing a clean install of openHAB 2.2/openHABian 1.4. At first it went well. I installed the Z-Wave binding and configured the Things. Then I rewrote my config files using the new channel info. The next step was the MySensors binding. That took a couple of days. I had to enable the mysensors-serial-port through the Karaf console, then install a jar file with the latest MySensors binding. That was successful up until I rebooted the system. After a reboot the Z-Wave binding and the network binding disappeared. Completely gone. I had to reinstall them. When I did, the Z-Wave serial port was once again not there. I couldn't type in a serial port in the configuration window; it just gave me a dropdown list of /dev/ttyUSB0. If I unplugged the MySensors gateway and rebooted, then I could manually enter a serial port (/dev/ttyACM0), but the system still insisted it didn't exist. The same old problem as before. Basically all the work I had done for the past 3 days was gone. Sorry. I feel guilty for telling you to try the clean install. I only wanted to help.
In the past, when I had problems with "ghost" config that disappears after reboot, they were caused by FS corruption (too common on the RPi)… but I can't be sure in your case. Maybe all these problems are caused by an OH 2.2 bug. My problem is still here, and I went back to my old RPi3 with OH 2.1 instead of the Rock64 with OH 2.2. When I have more free time, maybe I'll try again with a 32-bit JVM, etc. Mine too, until I added the LG serial. If I remove the LG serial binding it starts to work again. I am sure it's the same for MySensors. Right now I have to remove the LG serial binding before restart and then re-add it again when OH has started… I've decided to stick with v2.1 until the bugs are worked out of v2.2. I have enough work to do learning to write rules. My Unix skills are at the beginner's level, and despite the statement on the openHABian page, you have to be a Unix enthusiast to effectively manage a Raspberry Pi based system. Besides, my current system isn't broken, so I'm not going to mess with it. Upgrading was a nice-to-have, not a necessity. Just for the heck of it I decided to install openHAB 220.127.116.11 on a computer running Ubuntu 16.04. I followed the directions in the forum. openHAB is in the dialout and tty groups, and EXTRA_JAVA_OPTS are as specified in the instructions. The first time I plugged my AEON Z-Stick into the USB slot everything worked fine. I then attached the MySensors serial gateway into another USB port and it worked. Then I shut down the system. When I restarted the system with just the Z-Wave stick in a USB port, openHAB claimed "/dev/ttyACM0 does not exist". This problem isn't confined to the Raspberry Pi.
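Several posts above mention checking the Java opts in /etc/default/openhab2 without showing them. For reference, the workaround commonly suggested in the openHAB docs for ports that nrjavaserial refuses to see is to list the ports explicitly via the standard RXTX system property (the port paths shown are the ones from this thread; adjust to whatever your stick enumerates as):

```shell
# /etc/default/openhab2 -- tell the serial library about non-standard ports.
# gnu.io.rxtx.SerialPorts is the stock nrjavaserial/RXTX property;
# separate multiple ports with ':'.
EXTRA_JAVA_OPTS="-Dgnu.io.rxtx.SerialPorts=/dev/ttyUSB0:/dev/ttyACM0"
```

After editing, make sure the openhab user is in the dialout (and tty) groups and restart the service; as the thread shows, this alone may not cure the 64-bit ARM nrjavaserial issue, but it rules out the most common cause of "Port /dev/ttyACM0 does not exist".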
OPCFW_CODE
More than 20 years after its debut, Java remains a popular programming language, including for Android apps. Learn more about Java's principles and history. Oracle Technology Network is the ultimate, complete, and authoritative source of technical information and learning about Java. Get an introduction to the structure, syntax, and programming paradigm of the Java language and platform in this two-part tutorial. Java is a programming language and is commonly used for developing and delivering content on the web. It is an object-oriented language similar to C++, but simplified. Java programming tutorials for beginners and professionals, with core concepts and examples covering the basics. This page is your source to download or update your existing Java Runtime Environment (JRE, Java Runtime), also known as the Java plug-in (plugin) or Java Virtual Machine. Java: island of Indonesia lying southeast of Malaysia and Sumatra, south of Borneo (Kalimantan), and west of Bali. Java is only the fourth largest island in Indonesia. If Java is working, you will see a pink rectangle above with one line of text that says something like: Java version 1.8.0_25 from Oracle Corporation. Java software allows you to run applications called applets that are written in the Java programming language. These applets allow you to have a much richer experience. The Java space contains technical articles, blogs, and discussion forums with questions and answers about Java technologies. Download Java Runtime Environment for Windows now from Softonic: 100% safe and virus free. More than 756 downloads this month. Learn Java online from 122 Java courses from top institutions like Duke University and the University of California, San Diego. Build career skills in computer science.
The best free Java software app downloads for Windows: DJ Java Decompiler, Java Launcher, JavaJar, JavaExe, Java SE Development Kit. Tutorials and reference guides for the Java programming language. Understand Java: Java is Indonesia's fifth-largest island; its 130 million people make up 65% of Indonesia's entire population, making Java the most populated island. Java Standard Edition (SE) is a free software bundle that provides the Java Runtime Environment and the libraries and components you need to display a wide range of content.
- This free Java tutorial for complete beginners will help you learn the Java programming language from scratch. Start coding in no time with this course.
- Java tutorial for beginners: learn Java in simple and easy steps, starting from basic to advanced concepts, with examples including Java syntax and object orientation.
- Learn how to install Java on your PC so you can run apps that require it in Internet Explorer.
- Videos from the Java community and Oracle about everything in the Java ecosystem.
Java (Indonesian: Jawa; Javanese: ꦗꦮ; Sundanese: ᮏᮝ) is an island of Indonesia. At about 139,000 km², the island is comparable in size to England. Free download: Java JRE 9.0.4 / 10 build 45 early access / 11 build 2 early access. A runtime environment allows end-users to run Java applications. How to best connect with the Java community: join or reconnect with your local #Java user group. Java concepts: Java was developed to achieve five main goals. It should be simple, object-oriented, distributed, and easy to learn; it should be robust and secure. With our interactive Java course, you'll learn object-oriented Java programming and have the ability to write clear and valid code in almost no time at all.
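Since the page above repeatedly describes Java as "object-oriented, similar to C++ but simplified" without showing any code, here is a minimal sketch of what that looks like in practice (the class and names are illustrative only): a class bundles state with behavior, and main() is the entry point.

```java
// Minimal object-oriented Java: state (a field), behavior (a method),
// and a main() entry point. No headers, no manual memory management.
class Greeter {
    private final String name;  // state, hidden behind the class boundary

    Greeter(String name) {
        this.name = name;
    }

    String greet() {            // behavior operating on that state
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(new Greeter("world").greet());
    }
}
```

Compile with javac and run with java; the JVM (the "runtime environment" mentioned throughout the page) is what actually executes the compiled bytecode.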
OPCFW_CODE
First step to create an iPhone app
In this tutorial, you will get an overview of the basic structure of a simple iPhone app. If you aren't aware of what tools are required to develop an iPhone app, go through this link: "What you required to Begin iOS Programing." Here we start…
To understand the structure of an iPhone app, let's create a sample project. Open Xcode and create a new project. Select Single View Application and press the Next button. See below screen:
Fill in the following information as shown in the below image:
Here is some basic information about the fields we filled in above.
- Product Name : "Your Project Name" [Here "HelloWorld"]
- Organization Name : "Your Organization Name"
- Company Identifier : "Domain Name". Any domain name, but remember: whatever domain name you set will be used in the "Bundle Identifier", so choose it wisely.
- Bundle Identifier : A unique identifying string which is used to locate an application's bundle at run-time. A bundle identifier must be registered with Apple and should be unique to the application. Bundle identifiers are normally [not always] written out in reverse-DNS notation. (For example: com.mycompany.helloworld)
- Device : iPhone
Note : For this project, leave the [Storyboard and Include Unit Tests] fields unchecked. We will create projects with storyboards in the future, but for now keep these options unchecked.
Click the Next button. Select the place where you want to store the project and click "Create". It's done! Simple, right?
Now let's see what our project contains. But before that, refer to this image, which will give you information about the Xcode structure:
In your project, on the leftmost part you will see the Project Navigator, in which you see the following groups with some files. Expand each and every group (as shown in the below image:)
Here we will go through each file and understand what it exists for and what its use is:
- Xib/Nib : It contains UI elements and other UI controls.
It will show you a pictorial view of an object. See the image below:

- info.plist: contains all configuration details regarding the application.
- main.m: the execution of an Objective-C app begins from main().
- _prefix.pch: this file is included at the top of every translation unit [meaning every compiled Objective-C file]. Normally it is used to import framework headers, so that we don't have to type #import <Foundation/Foundation.h> in every file.
- AppDelegate: this file is responsible for launch-time initialization. The primary job of this object is to handle state transitions within the app, including transitions to and from the background.
- Controller objects: these files are intermediaries between Model and View. A controller updates the view when the model changes, and updates the model when the user manipulates the view. In short, these files contain the app logic.
- View of the app: presents the Model to the user in an appropriate interface, allows the user to manipulate data, and is easily reusable and configurable to display data.

You've now reached the end of the introduction to an iPhone app's structure. I think this much information is sufficient to make your first iPhone app. In the next tutorial we are going to create our first iPhone app. Before moving on, if you have any questions or comments about what we have covered so far, please let us know.
Group theory in change

Years ago, when I first encountered this material in Watzlawick's "Change", I had a feeling that it was great, but I couldn't quite grasp it in its fullness. It felt like there was a lot more to be understood for changework there, but I lacked the perspective for it. I've recently come back to it, and I think I get what it's saying now, but, honestly, there are probably quite a few much smarter people among you here, so I figured I'd throw it out for everyone as a useful thing to start with.

Now, the basis of this is group theory, with a little bit of class theory added in. Group theory, as first conceived by the tragically brilliant Évariste Galois (and polished by his many successors), goes, in a bit of a simplification, like this…

- Any group is composed of members which are all alike in one common characteristic (all other aspects of their identity notwithstanding). So we can have groups of people, emotions, numbers, whatever you like. Furthermore, a combination (adding or subtracting members) of any members will still produce a member of the group. This little thing is extremely important because, as we will see, it has a huge impact on group theory in changework. (Basically, since any combination of group members still leads us to a member of the group, this limits the possible changes when basing them on members of the group.)
- Members can be combined in a varying sequence, but the outcome of this combination remains the same, no matter how the sequence changes, as long as all the same members are combined: 4 + 3 + 1 + 2 = 1 + 2 + 3 + 4.
- A group contains an identity member, a member which, when combined with another member, maintains the identity of the other member. In a group of added numbers it's 0: 1 + 0 = 1, 5 + 0 = 5. In a group of multiplied numbers it's 1: 1 × 1 = 1, 5 × 1 = 5, etc. In a group of people, it would be no one: John + no one = John.
For changework this matters because a member of the group may act and lead to no change in the identity of another member. There are "null and void" actions available, which can be mistaken for actions with actual consequences.

- In any system which can be described as a group, each member has an opposite, where the combination of a member and its opposite gives us the identity member. For addition this would be the negative of the member, so 5 + (-5) = 0, which we know to be the identity member. (This is the point where we easily fall into typical opposites if we concentrate on addition alone, but take care that other combination rules could apply. For example, if the combination rule is multiplication, as in the group 2, 4, 8, 16, etc., the identity member would be 1, so the opposite of the member 4 would actually be 0.25.)

For changework this is important because what seems like significant change might still have us ending up in the middle of the group we had hoped to leave.

Now, group theory is very useful for thinking about people's situations, problems, and the states of dynamic systems. What it does not allow us to do is to think or talk about things that exceed the group. For that, we need to talk about logical types, with their primary rule of "whatever involves all of a group must not be of the group".

So, why care about this if you are not one of the mathematically inclined? I'm not, to be honest, but I could see that there was something here, even years ago when I couldn't quite put my finger on it. What this gives us, for changework, is this: when people experience problems, their reactions to these problems tend to form a specific group. And when change gets stuck, it is often because people attempt to solve the situation while keeping within the group. (This is not to say that problems cannot be solved within the group, by such a first-level solution.
Often they can, but, honestly, people will rarely need assistance with that category of problems, so it's not something a changeworker is likely to encounter.) In fact, what most people tend to do in such situations is some variation of rule two: they attempt to change the sequence of the solutions. But, as long as the solutions are the same and all belong to the group, no change in the sequence will lead to a different result. It's like taking a step in each of the cardinal directions: as long as you take exactly one step each and the road isn't blocked, you'll get back to where you started, no matter if you go NSWE, ESNW or any other combination.

For these situations, what must be done is to cause a change from outside the system, a second-level solution. For that you need to be able to map the original group, together with its identity member (and the opposites that come with it), and try to map structures above and beyond the original group. It seems a bit theoretical at first, but when you start to dig into it, it becomes a really useful perspective. As I said, a lot of you folks are smarter than me, so I figured you might find it useful to dig your teeth into this.
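The cardinal-directions example can be sketched in a few lines of Python (my own illustration, not from the original post): any ordering of exactly one step in each direction combines back to the identity member, the origin.

```python
from itertools import permutations

# The "group" here is positions reached by combining unit steps; vector
# addition is the combination rule, and the identity member is the origin.
STEPS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def walk(sequence):
    """Combine the chosen step members by vector addition."""
    x, y = 0, 0
    for direction in sequence:
        dx, dy = STEPS[direction]
        x, y = x + dx, y + dy
    return (x, y)

# Rule 2 in action: every ordering of N, S, E, W ends back at (0, 0).
all_return_home = all(walk(p) == (0, 0) for p in permutations("NSEW"))
```

Swap in a different combination rule (multiplication, say) and the identity member and the opposites change with it, which is exactly the caveat made above.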
What's the main difference between Java SE and Java EE?

http://www.daniweb.com/forums/thread97463.html

Java SE vs Java EE

Java SE (formerly J2SE) is the basic Java environment. In Java SE, you make all the "standard" programs with Java, using the API described here. You only need a JVM to use Java SE. Java EE (formerly J2EE) is the enterprise edition of Java. With it, you make websites, Java Beans, and more powerful server applications. Besides the JVM, you need a Java EE-compatible application server, like GlassFish, JBoss, and others.

Java SE stands for Java Standard Edition and is normally for developing desktop applications; it forms the core/base API. Java EE stands for Java Enterprise Edition, for applications which run on servers, for example web sites. Java ME stands for Java Micro Edition, for applications which run on resource-constrained devices (small-scale devices) like cell phones, for example games.

As far as the language goes, it is not as though Java changes. Java EE has access to all of the SE libraries. However, EE adds a set of libraries for dealing with enterprise applications. Java EE is more like a "platform" or a general area of development. In Java SE you write applications that run as standalone Java programs or as applets. In Java EE you can still do this, but you can also write applications that run inside of a Java EE container. The container can do a great amount of management for you, such as scaling an application across threads, providing resource pools, and management features. Java EE has a web framework based upon servlets. It has JSP (JavaServer Pages), which is a templating language that compiles from JSP to a Java servlet, where it can be run by the container. So Java EE is more or less Java SE + enterprise platform technologies.
Java EE is far more than just a couple of extra libraries (that is what I thought when I first looked at it), since there are a ton of frameworks and technologies built upon the Java EE specifications. But it all boils down to just plain old Java.

You know that EE stands for 'Enterprise Edition', right? And it's not one product but a set of products.

@Mohiul This is a great response because you talk about the relationship between Java EE and Java SE. The former is basically a superset of the latter.

I think this should be marked as the correct answer, as it gives complete information.

Java SE refers to the standard version of Java and its libraries. Java EE refers to the Enterprise Edition of Java, which is used to deploy web applications.

Why on earth would someone downvote this? Did I provide false information? Some people are just weird...

I was about to downvote (but in the end I didn't; I decided to write this comment instead) because "web applications" is just one of the several situations in which you would need a server (and thus would use EE instead of SE).

By 'web application' I didn't only mean 'web sites'. I should have said server applications to be more clear; you are right.

Java EE is the Enterprise Edition. Includes JSP, servlets, beans, and some other stuff for server programming. Java SE is the Standard Edition. This is plain old Java. Includes GUI stuff.

First, J2SE and J2EE have been renamed. They're now Java SE and Java EE. Essentially, Java SE is your standard Java designed for end users. That's what you'd develop to for desktop applications. Java EE is the enterprise edition, designed for server programming, such as SOA and web applications.

Everyone still uses the old names though!

@John: No! Old names are bad! You will accept whatever garbage Sun's marketing department feeds you, and you will like it!

What does SOA exactly mean?

The best description I've encountered so far is available on the Oracle website.
Java SE's API provides the core functionality of the Java programming language. It defines everything from the basic types and objects of the Java programming language to high-level classes that are used for networking, security, database access, graphical user interface (GUI) development, and XML parsing.

The Java EE platform is built on top of the Java SE platform. The Java EE platform provides an API and runtime environment for developing and running large-scale, multi-tiered, scalable, reliable, and secure network applications.

If you consider developing an application using, for example, the Spring Framework, you will use both APIs and will have to learn key concepts of JavaServer Pages and related technologies, such as JSP, JPA, JDBC, dependency injection, etc.

What is the description then (don't just give a link), and is that answer different from the older answers here?

@Mark I've updated the answer especially for you, but I still think content duplication is a bad idea. See this site's help: "Links to external resources are encouraged, but please add context around the link so your fellow users will have some idea what it is and why it's there. Always quote the most relevant part of an important link, in case the target site is unreachable or goes permanently offline."

Java SE and Java EE are both computing platforms which allow developed software to run. There are three main computing platforms released by Sun Microsystems, which was eventually taken over by the Oracle Corporation. The computing platforms are all based on the Java programming language. These computing platforms are:

Java SE, i.e. Java Standard Edition. It is normally used for developing desktop applications. It forms the core/base API.

Java EE, i.e. Java Enterprise Edition. This was originally known as Java 2 Platform, Enterprise Edition or J2EE. The name was eventually changed to Java Platform, Enterprise Edition or Java EE in version 5.
Java EE is mainly used for applications which run on servers, such as web sites.

Java ME, i.e. Java Micro Edition. It is mainly used for applications which run on resource-constrained devices (small-scale devices) like cell phones, most commonly games.

In Java SE you need software installed to run the program: if you have developed a desktop application and want to share it with other machines, all of those machines have to install the software to run the application. But in Java EE there is no software that needs to be installed on all the machines. Java EE has the forward capabilities. This is only one simple example; there are lots of differences.

Could you clarify that? Java EE needs a JVM just like Java SE. What are "forward capabilities"?

The biggest difference is the enterprise services (hence the EE), such as an application server supporting EJBs, etc.
Bug with position in 360° equirectangular recording option

TLDR: All 6 camera positions in the 360° equirectangular recording option need to be moved back by 0.610 blocks.

A small change needs to be made to the 360° equirectangular recording option: the camera needs to be moved backwards by 0.610 blocks. This is because Minecraft's camera sphere (the furthest-away area that the game's camera can move to) is 0.610 blocks big. This is fine for normal recording, as it simulates your head being slightly ahead of your body's core, but for 360° recordings it is a problem that needs to be fixed.

In case my words aren't clear enough, here is a demonstration that I took using version 3.6.1-ShaderSync.

The coordinates for this first image are X: 401.0, Y: 126.610, Z: -5.5. As you can see in this image, the camera facing down is perfectly inside the block, despite the fact that it should be 0.610 above the block. This is how I determined that the camera needs to be moved back by 0.610 blocks.

The coordinates for this second image are X: 401.0, Y: 126.611, Z: -5.5. Here I have moved the camera 0.001 blocks upwards, and now the camera is 0.001 blocks away from the block below it. Again, Minema's camera position here is 0.611 blocks above the grass, but it appears 0.001 blocks away because of this bug. You can also see in this image that the cobblestone block closest to the camera has been chopped in half. This is also why none of the positions line up.

The coordinates for this third image are X: 401.5, Y: 126.8, Z: -5.5. This is a best-case scenario. The camera in Minema has been moved multiple blocks above the grass below it, so that hopefully the effect is less noticeable; however, you can still definitely see the bug.
The cobblestone block from the second picture has also been fixed; however, this is because Minema's camera position has been set to be in the middle of the block: the edges of the frames are at 45°, so the edge of the block is all within one frame.

If you guys need any more info to fix this issue, please ask. I am happy to provide it.

@NyaNLI you might want to take a look at this.

Sorry, no offense, but this issue is caused by this code in your shaderpack:

    position = gbufferModelView * position;
    position.z += 0.1;
    position = gbufferModelViewInverse * position;

And about the first image: this is because OpenGL discards samples with a depth less than a specific value; for Minecraft this value is 0.05 (after linearization). I can't fix it, sorry.

Wow, that's fascinating! Thanks for a quick reply, and I suppose it's not Minema's bug/issue. Although, @RYRY1002 thank you very much for such a detailed overview in the original message. I suppose you can check whether the same shader has the issue with ReplayMod. If it doesn't, I suppose it artificially raises the camera position or something similar.

Yeah, it's fine. This is a shaderpack I am making myself, so I can change the code very easily. Thanks for the quick response; you don't see that on most Minecraft mods. Turns out I had implemented that exact code from before Minema had 360° support, because the mod didn't do that by itself. Now, of course, it does. So what was happening was that Minema and my shader were both doing that code, overdoing it. After removing the code, the issue was resolved. Thanks for the help. @NyaNLI deserves to be a collaborator, I think.

Awesome! I'm glad the issue was resolved! 🥳

@NyaNLI deserves to be a collaborator I think.

Indeed; however, I can't grant collaborator in this repository. Only @daipenger can.

@NyaNLI if you want, I can give you the collaborator role on the mchorse/minema repo.

Thanks for your recognition, but it's not necessary, I think.
My name is already in the author list, so I think that's enough for me. :)
... very unforgiving, unashamedly harsh and without giving out any sort of initial crutch, but an excellent and clever idea that's very well executed and is much more interesting than most games out there; this is very, very good. Allow me to detail my review:

- Graphics: oh boy, this is just too clever. You deliberately use an isometric projection to create an intensely tricky illusion that's very hard to deal with and requires readjusting our mental processes so that we can make progress. That's the core essence of the game and, as such, it's vital that the projection is spot-on, as it is in this case. Kudos on that alone! Now, the actual pixel art is pretty good, but not exceptional. The designs are somewhat sparse and functional, which I guess helps to keep the focus on the puzzle resolution, but a bit more eye candy throughout could have made the game even better. I will say that outside the 'main puzzle areas' there are a few very nice areas, though.
- Music is pretty solid; it fits the tone of the game well and is well used. The sound effects are very '8-bit' and fit the bill very well.
- Gameplay: oh boy. It's very interesting to be able to explore the illusions presented, and I think you provide us with a good diversity of challenges, mixing things up just about the right amount to keep things interesting throughout. But it really is very unforgiving, a necessary consequence of the lives system, clearly an attempt to stop people from simply brute-forcing the solutions. In other words, it works, but it's a tough game.
- Story-wise, well, it's very bare-bones, so keeping things vague and allusive is a good choice; it sets the right tone for the game and doesn't distract too much, which is probably ideal in a (relatively) short game like this. I like the comments that Naya makes; they usually work well.
- Finally, the pacing and length are spot-on, again.
The game could have been longer, a lot longer in fact, as there are plenty of ways to expand on the gameplay, but that would have required a lot more effort, so given the chosen length, I think the pacing is very good, providing infrequent stops to help create an atmosphere (but see above about eye candy) and give the players save points.

So, to summarize: a clever idea that's very well executed, and a challenging and demanding game that's fun, frustrating and interesting to play; well polished, but it could have been a bit more rewarding to the players in some ways.

Finally, a note on some of the frankly abusive reviews below: this is not a game for everyone, and it does show, unfortunately. By not providing a clear tutorial and/or demonstration of the game mechanics, a lot of people have failed to grasp the very basics of the game, and even some that clearly understand how it works have despised it, which is a shame. The former are reprehensible (in that modern games are too generous in that sense and have spoilt people a bit too much), the latter more understandable, since, as I mentioned right at the start, this is a surprisingly tough game; the mental adjustment necessary to be able to explore the game and succeed is non-trivial, and the game is frustrating in many ways, so a lot of patience is required. Sadly, I guess most people's expectations of a flash game preclude the notion of having to have patience, with the resulting reviews. So I say kudos and congratulations, Terry: this is a great game that deserves praise for executing a deceptively clever idea very, very well.
import collections

from codega.visitor import VisitorBase
from codega.ordereddict import OrderedDict

REQUIRED = 0
OPTIONAL = 1

reserved_property_prefixes = set(('__', 'ast_'))


def is_reserved(name):
    for prefix in reserved_property_prefixes:
        if name.startswith(prefix):
            return True

    return False


class AstNodeBase(object):
    ast_location = (None, None)

    def __init__(self, *args, **kwargs):
        assert hasattr(self, 'ast_class_info')
        self.ast_class_info.map_properties(self, args, kwargs)

    def __str__(self):
        return "%s(%s)" % (self.ast_name, ', '.join('%s=%r' % (k, v) for k, v in self.ast_properties.items()))

    __repr__ = __str__


class AstList(AstNodeBase, collections.Sequence):
    def __init__(self, *args, **kwargs):
        body = kwargs.pop('body', ())
        if isinstance(body, AstList):
            body = body.data

        # Optional 'head' and 'tail' members are folded into the body tuple.
        if 'head' in kwargs:
            head = kwargs.pop('head')
            body = (head,) + body

        if 'tail' in kwargs:
            tail = kwargs.pop('tail')
            body = body + (tail,)

        AstNodeBase.__init__(self, data=body)

    def replace(self, data):
        return self.__class__(body=tuple(data))

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)


class Info(object):
    '''Stores information about an AST class'''

    def __init__(self, name, properties, base=AstNodeBase):
        self.__base = base
        self.__name = name
        self.__properties = OrderedDict(properties)

    @property
    def name(self):
        return self.__name

    @property
    def bases(self):
        # Fixed: the original referenced a nonexistent self.__bases attribute.
        return (self.__base,)

    @property
    def properties(self):
        return OrderedDict(self.__properties)

    def map_properties(self, obj, args, kwargs):
        res = OrderedDict()
        args = list(args)
        kwargs = dict(kwargs)

        # Distribute keyword arguments first, then positionals, then fill
        # optional properties with None; a leftover required property is an error.
        for name, klass in self.__properties.iteritems():
            if kwargs and name in kwargs:
                res[name] = kwargs.pop(name)

            elif args:
                res[name] = args[0]
                del args[0]

            elif klass == OPTIONAL:
                res[name] = None

            else:
                raise ValueError("Cannot handle required argument %s" % name)

        setattr(obj, 'ast_properties', res)
        for key, value in res.iteritems():
            assert not hasattr(obj, key)
            setattr(obj, key, value)

    def get_class(self, metainfo):
        if not metainfo.has_class(self.__name):
            self.__create_class(metainfo)

        return metainfo.get_class(self.__name)

    def __create_class(self, metainfo):
        members = {}
        members['ast_class_info'] = self
        members['ast_name'] = self.__name
        return metainfo.define_class(self.__name, (self.__base,), members)


class Metainfo(object):
    '''AST node-set meta information. Contains the AST node classes.'''

    def __init__(self, metaclass=type):
        self.__metaclass = metaclass
        self.__classes = {}

    def register(self, name, cls):
        self.__classes[name] = cls

    def get_class(self, name):
        return self.__classes[name]

    def has_class(self, name):
        return name in self.__classes

    def define_class(self, name, bases, members):
        cls = self.__metaclass(name, bases, members)
        self.register(name, cls)
        return cls


class AstVisitor(VisitorBase):
    '''AST specific visitor class. For AST classes the visitor will use the
    name of the class, for other types the MRO will be used as in the
    ClassVisitor.'''

    def aspects(self, node):
        return [node.ast_name]
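The heart of the module is Info.map_properties, which distributes positional and keyword arguments over declared properties and fills optional ones with None. A self-contained Python 3 re-sketch of that rule (names and simplifications are mine; the module above is Python 2 and tied to codega's OrderedDict):

```python
REQUIRED, OPTIONAL = 0, 1

def map_properties(properties, args, kwargs):
    """Distribute args/kwargs over declared (name, kind) property pairs,
    mirroring Info.map_properties above (simplified sketch)."""
    res = {}
    args = list(args)
    kwargs = dict(kwargs)
    for name, kind in properties:
        if name in kwargs:
            res[name] = kwargs.pop(name)    # explicit keyword wins
        elif args:
            res[name] = args.pop(0)         # next positional argument
        elif kind == OPTIONAL:
            res[name] = None                # optional property defaults to None
        else:
            raise ValueError("Cannot handle required argument %s" % name)
    return res

# A node with one required and one optional property:
props = [("value", REQUIRED), ("comment", OPTIONAL)]
```

Calling map_properties(props, (42,), {}) fills "value" positionally and defaults "comment" to None, while omitting all arguments raises ValueError for the required property.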
Skeletal animation demo

It would be cool to have a demonstration showing how to implement something like skeletal animation/vertex skinning on top of regl. There is a half-baked version on the skeleton branch currently, but it is blocked by the collada parsing being incomplete. If we can fix that up, then making a demo shouldn't be too hard. Here's the issue for fixing the parser -> https://github.com/chinedufn/collada-dae-parser/issues/3

What was wrong

So it turns out that the issue was my ignorance of control (non-deformation) joints like IKs, pole targets, etc. It turns out that the collada format doesn't do much to support these. The model that we were trying to use had these types of control joints, so it just didn't work when we tried a few months ago. The parser will now throw an error if you attempt to use a collada file that contains control joints. Instead, you must first bake the effects of your control joints into your deformation bones. Docs on how to fix a model -> https://github.com/chinedufn/collada-dae-parser/blob/a9c6586b9e6f520d2c773fce73a24d8125427af7/docs/blender-export/blender-export.md#control-joints

Path Forward

We already have a reference vertex shader and fragment shader, so we'd effectively be copying and pasting them into a regl program.

From my experience, COLLADA is the worst possible format for loading assets because it's so insanely complicated (it's only great for exchanging assets between applications). If it's for a demo with a single hardcoded model, consider something like glTF. I wrote my own COLLADA loader/converter that loads a COLLADA file and exports it to a custom JSON+binary format that's super easy to parse in browsers. See this live preview and the github repo. The live preview renders animated models, and I also have a highly experimental branch with support for animation blend trees (see below image for an example of smoothly blending between standing/walking/running animations).
Feel free to have a look at any of this and salvage any usable code from it. I bought the model from bitgem.com, so I can't share it.

I would second glTF. Here is a glTF repo with sample models.

@chinedufn what are your thoughts on adapting https://github.com/chinedufn/skeletal-animation-system to use regl? I was just considering this but haven't wrapped my head around it yet.

@crobi @vorg for sure, for sure; it wouldn't hurt to have examples for how to deal with both file formats though. But yeah, just a question of where to start.

@kevzettler if you're interested in doing that I'd be happy to answer any questions / provide advice on how to get started / help in any way that I can.

I ported @chinedufn's skeletal-animation-system to regl. See: https://github.com/chinedufn/skeletal-animation-system/pull/3 I am interested in feedback on the demo and further discussion here.

We could use that as the example. It is a good start, but I feel there is room yet for a more simplified example. The skeletal-animation-system demo has code for texture support, a camera, and dual quaternion animation joint blending. I can picture a more simplified demo with a single, untextured, animated skinned mesh. Additionally, I feel the Blender -> collada -> regl/WebGL workflow is not the best learning example. I started it on a personal model and found the manual conversion of IK joints to deformation joints to be really tedious.

On the topic of glTF: the Khronos Group is currently requesting quotations on a contract project to build a Blender -> glTF exporter. See: https://www.khronos.org/rfq/request-for-quote-blender-gltf-exporter . You could build out the exporter and test it against regl. It could be a good regl mindshare-building opportunity. I'm considering placing a bid but have no Python experience and minimal Blender plugin experience. I would be straight winging it.
I wanted to put it on your radar (@vorg @crobi).

> the Blender -> collada -> regl/WebGL workflow is not the best learning example

The glTF Blender exporter is now in alpha, and does support morph targets: https://github.com/KhronosGroup/glTF-Blender-Exporter/

Does the skeleton branch mentioned in the OP still exist somewhere?

@donmccurdy if you're looking for regl skeletal animation support, I did put together a demo for the skeletal-animation-system module at https://github.com/chinedufn/skeletal-animation-system/pull/3. I'm not sure why it was removed from the master branch of the repo, but it was working with a skinned model.

I don't quite remember why I removed it, but I think I was planning to re-write the demo site for some reason, but then I changed languages and stopped using skeletal-animation-system.
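For anyone skimming this thread later: the per-vertex work that these skinning demos do in the vertex shader is linear blend skinning, a weighted sum of joint transforms. A minimal CPU-side sketch in Python (illustrative only; the names and the 3x4 matrix layout are my own, and real implementations do this on the GPU, often with dual quaternions or 4x4 matrices):

```python
def skin_vertex(position, influences):
    """Linear blend skinning for one vertex.

    position: (x, y, z) in bind pose.
    influences: list of (weight, m) pairs, where m is a 3x4 row-major affine
    transform (3x3 rotation/scale plus a translation column), assumed to be
    already composed as current_pose * inverse_bind_pose.
    Weights are assumed to sum to 1.
    """
    x, y, z = position
    out = [0.0, 0.0, 0.0]
    for weight, m in influences:
        # Apply the affine transform, then accumulate weighted by influence.
        tx = m[0][0] * x + m[0][1] * y + m[0][2] * z + m[0][3]
        ty = m[1][0] * x + m[1][1] * y + m[1][2] * z + m[1][3]
        tz = m[2][0] * x + m[2][1] * y + m[2][2] * z + m[2][3]
        out[0] += weight * tx
        out[1] += weight * ty
        out[2] += weight * tz
    return tuple(out)

IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
MOVE_UP  = [[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 1, 0]]  # translate +1 in y
```

A vertex influenced half by a static joint and half by one that moved up ends up halfway between the two poses, which is the blending that the dual-quaternion variant discussed above refines.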
If there's one thing entrepreneurs are good at, it's pouring time and money into a product idea only to launch and find that a paying audience does not exist! Sometimes that is because we are overeager to get started doing what we love (designing and developing), and sometimes it's because we don't want to believe that our great idea doesn't have a market.

Validating a product idea isn't easy. Even if you can validate the direction enough to start building a product, constant testing and refinement will always be needed. The idea and direction you start with will need to change and mutate as new insights are discovered by building a product and talking to customers. Never think that you can plan enough at the beginning to grow on auto-pilot with zero strategy change. One of my favorite quotes comes from Eric Ries in The Lean Startup:

"Unfortunately, too many startup business plans look more like they are planning to launch a rocket ship than drive a car." -Eric Ries in 'The Lean Startup'

What he is referencing is how too many entrepreneurs start a venture with excessive planning instead of building a product and validated learning. In The Art of the Start by Guy Kawasaki, one of the first snippets of advice he offers is: don't write a business plan! Instead, get started with the product idea and, if successful, write a business plan later. A business plan and excessive planning will be pointless if your product idea turns out to be a bust. So how should you start?

How to validate a product idea

Well, I like to start with an idea and validate that idea with as little work as possible. Validating can take several forms, including talking to your friends and peers, understanding a market from personal experience, and testing the online community.

- Talking with people you know personally is always a good idea, as doing so will help frame your thoughts and present the concept in a solid manner.
Friends can be forgiving if you don't know how to clearly communicate, and willing to help refine your vision.

- Having a strong personal understanding of a problem and product solution is the best way to validate a product idea, at least as a direction. If you don't understand a subject, like money management, with all its ins and outs and complexities, then maybe you shouldn't try to build a web app for money managers. Instead, focus on something you are not only personally interested in but a semi-expert at.
- Validating online is essential, as doing so will reach a broader audience and you will receive a wider range of input (good and bad; you may find that friends don't want to hurt your feelings and won't say no to your ideas).

The easiest way to validate an idea is to see how many people are interested. Doing so is not a sure-fire success story, but it is the fastest way to test the feasibility of a product idea. This involves two steps:

- Build a marketing page. Create a simple marketing page explaining the product, what pain it solves, and the benefits expected, possibly showing mock product shots.
- Drive traffic. Drive traffic to the marketing page with one conversion goal in mind, typically to get visitors to sign up for an email list. (Though I have seen people actually accept payments in advance of building a product.) Whatever your direction, stay simple and go for one conversion goal.

After testing several product ideas myself, I decided to convert the code I kept reusing into a conversion-page boilerplate called Breadbox. Breadbox is an open source project that makes spinning up a simple product page super simple. It's not fancy at all, and that's okay. The default package offers a simple logo placement, a tagline, and an email sign-up field. That's it! The Breadbox Github repo has more information and suggestions on how to personalize a fork of Breadbox, but to start and validate an idea, the less work you put into it the better.
Examples of Breadbox in action

Here is an example of using the Breadbox splashpage. Initially, for my HelixPowered product, I threw up a simple splash page, added a catchy tagline, and posted to a relevant community (Designer News in this case). In less than 24 hours, the site had just over 600 visitors and 197 email signups (so about a 30% conversion rate). While I know many people dislike product pages asking for emails with limited (or no) explanation or information, the thing is, it does work. And it's a good way to test and validate an idea before investing too much time and money in a product that may not have a solid market. Haters gonna hate, so it's up to you whether you choose to use this method.

What I did was use the initial splash page, receive some email sign-ups, send out an email, and get some feedback; then, when I had input on the product (and validation), I built out an extended version of the landing/marketing page for HelixPowered. IMHO, this is a great early path to starting a new product.

Another example of Breadbox in action: the splash page for CourseMakers (which now has a full marketing page), an online teaching platform for designers and developers to build and sell digital classes.

Go forth and validate

I hope this helps in some way, and if you have ideas to validate, maybe Breadbox will help you come to a conclusion sooner rather than later. I'd love to hear how you decide which projects to work on, and if anyone does use Breadbox to validate an idea, post a comment to share it with everyone!
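With a single conversion goal, the number you watch is just signups over visitors. A quick sketch of the HelixPowered math quoted above (197 signups from roughly 600 visitors works out to about 33%, which the post rounds to "about a 30% conversion"):

```python
def conversion_rate(signups, visitors):
    """Percentage of visitors who completed the single conversion goal."""
    return 100.0 * signups / visitors

# The HelixPowered numbers from the example: 197 signups, just over 600 visitors.
rate = conversion_rate(197, 600)  # roughly 32.8
```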
Simple process for backing up all files on a MacBook running OS X 10.4

I need a simple way to back up the contents of my girlfriend's MacBook hard drive. The MacBook is running OS X 10.4.11 (Tiger). I don't need to be able to restore the entire OS install; I just need to make sure that I can get access to saved files, and I don't want to hunt through the hard drive now to find all the things that might be important later. I plan to back up to an external hard drive.

Can I follow the instructions for backing up using Disk Utility, even though I'm using 10.4? Once the backup is complete, will I be able to easily mount the backup image to access files without restoring the entire thing back to the MacBook's internal hard drive? Is there another (simpler) way to do it?

The document you link to describes four methods. One is explicitly only available on OS X 10.5 and later, the second creates a full image of the hard disk, the next requires you to subscribe to a service, and the last is manual backup of selected files. All of this can be gathered from skimming the document. It's not clear what exactly you are referring to, and why you think it might work for your situation.

Possible duplicate of "Backup tool for Mac OS X?" In essence, use Carbon Copy Cloner and you will be fine. You can boot from this backup as well, but of course you don't need to.

You might want to look at SuperDuper (in particular version 2.7.1). SuperDuper is backup software specifically designed for OS X, and though it is commercial software, the free version is sufficient for what you are doing. It will copy the entire contents of your internal hard drive to an external drive, and you will be able to read and write files on the external just by connecting it via USB or FireWire. SuperDuper does its best to make the external drive bootable, but this is not always possible, depending on the files you have loaded onto it and the specifications of your external drive.
I'm using CrashPlan to back up my Macs. With the free version you can back up to a "local" destination as well as to distant ones you own (i.e., you can back up to your own Mac as long as both are connected to the internet). What I appreciate is that once it's set up everything is automatic and you don't notice anything; as soon as a destination becomes reachable, the backup starts. Quoted from their website:

Free Onsite & Offsite Computer Backup. Truly dependable computer backup means backing up to multiple locations - not just online - which until now could be complicated. CrashPlan automatically backs up to multiple destinations for FREE! CrashPlan's groundbreaking social backup concept makes it easy to back up to computers belonging to your network of friends or family for offsite backup, in addition to using your own computers and external drives for onsite backup. CrashPlan works on all your computers, so you don't have to worry about compatibility either. CrashPlan is backup the way it was meant to be: uncomplicated, reliable and even a little fun to use.
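For the plain "just get the files somewhere safe" goal in the question, an rsync mirror to the external drive also works, since rsync ships with Mac OS X. A hedged sketch (the paths below are placeholders, not from the question):

```python
import subprocess

def build_rsync_command(source, dest, excludes=()):
    """Assemble an rsync invocation that mirrors `source` into `dest`.

    -a preserves permissions and timestamps, --delete keeps the mirror
    exact, and the trailing slash on source copies its *contents*
    rather than the directory itself.
    """
    cmd = ["rsync", "-a", "--delete"]
    for pattern in excludes:
        cmd += ["--exclude", pattern]
    cmd += [source.rstrip("/") + "/", dest]
    return cmd

def run_backup(source, dest, excludes=()):
    # Runs the actual copy; requires rsync on PATH.
    subprocess.run(build_rsync_command(source, dest, excludes), check=True)

# Example (hypothetical home folder and external-drive mount point):
# run_backup("/Users/anna", "/Volumes/Backup/anna", excludes=[".Trash"])
```

Because the result is a plain folder tree rather than a disk image, individual files can be browsed on any Mac just by plugging the drive in.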
This year, we’ve already seen sizeable leaks of NVIDIA source code, and a release of open-source drivers for NVIDIA Tegra. It seems NVIDIA decided to amp it up, and just released open-source GPU kernel modules for Linux. The GitHub link named open-gpu-kernel-modules has people rejoicing, and we are already testing the code out, making memes and speculating about the future. This driver is currently claimed to be experimental, only “production-ready” for datacenter cards – but you can already try it out! The Driver’s Present State Of course, there’s nuance. This is new code, and unrelated to the well-known proprietary driver. It will only work on cards starting from RTX 2000 and Quadro RTX series (aka Turing and onward). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project – a good portion of features that AMD and Intel drivers implement in Linux kernel are, instead, provided by a binary blob from inside the GPU. This blob runs on the GSP, which is a RISC-V core that’s only available on Turing GPUs and younger – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty! Barring that, this driver already provides more coherent integration into the Linux kernel, with massive benefits that will only increase going forward. Not everything’s open yet – NVIDIA’s userspace libraries and OpenGL, Vulkan, OpenCL and CUDA drivers remain closed, for now. Same goes for the old NVIDIA proprietary driver that, I’d guess, would be left to rot – fitting, as “leaving to rot” is what that driver has previously done to generations of old but perfectly usable cards. The Future Potential This driver’s upstreaming will be a gigantic effort for sure, but that is definitely the goal, and the benefits will also be sizeable. Even as-is, this driver has way more potential. 
Not unlike a British policeman, the Linux kernel checks the license of every kernel module it loads, and limits the APIs available to any module that isn't GPL-licensed – which the previous NVIDIA driver wasn't, as its open parts were essentially a thin layer between the kernel and the binary drivers, and thus not GPL-licensable. Because this driver is MIT/GPL licensed, NVIDIA now has a larger set of interfaces at its disposal, and could integrate the driver better into the Linux ecosystem instead of maintaining a set of proprietary tools. Debugging abilities, security, and overall integration potential should improve. In addition to that, a slew of new possibilities opens up. For a start, it definitely opens the door for porting the driver to other OSes like FreeBSD and OpenBSD, and could even help libre computing. NVIDIA GPU support on ARM will become easier in the future, and we could see more cool efforts to take advantage of what GPUs help us with when paired with an ARM SBC, from exciting videogames to powerful machine learning. The Red Hat release says there's more to come in terms of integrating NVIDIA products into the Linux ecosystem properly, no stone left unturned. You will generally see everyone hail this, for good reasons. The tradition is that we celebrate such radical moves from big companies, even if imperfect – and rightfully so, given the benefits just listed, and the future potential. As we see more such moves from big players, we will have a lot of things to rejoice about, and a myriad of problems will be left in the past. However, when it comes to the openness we actually value, the situation gets kind of weird, and hard to grapple with.

Wait, What Does Openness Mean?

Openness helps us add features we need, fix problems we encounter, learn new things from others' work and explore the limits, as we interact with technology that defines more and more of our lives.
If all the exciting sci-fi we read as kids is to be believed, we are indeed meant to work in tandem with technology. This driver is, in many ways, not the kind of openness that helps our hardware help us, but it certainly checks many boxes for what we perceive as "open". How did we get here? It's well-known that opening every single part of the code is not what large companies do – you've got to hide the DRM bits and the patent violations somewhere. Here, a lot of the code that used to reside in the proprietary driver now runs on a different CPU instead, and is as opaque as before. No driver relies as much on binary blob code as this one, and yet, only semi-ironically, it's not that far from where it could technically get RYF-certified. It's just that the objectionable binary blobs are now "firmware" instead of "software". The RYF (Respects Your Freedom) certification from the Free Software Foundation, while well-intentioned, has lately drawn heat for being counterproductive to its goals and making hardware more complex without need, and even the Libreboot project leader says that its principles leave something to be desired. We have been implicitly taking RYF certification as the openness guideline to strive towards, but the Novena laptop chose not to adhere to it and is certainly better off. We have a lot to learn from RYF, and it's quite clear that we need more help. From here – what do we take as "open"? And who can help us keep track of what "open" is – specifically, the kind of openness that moves us towards a more utopian, yet realistic world where our relationship with technology is healthy and loving? Some guidelines and principles help us check whether we are staying on the right path – and the world has changed enough that old ideas don't always apply, just like with the cloud-hosted software loophole that is proving tricky to resolve. But still, a lot more code just got opened, and this is a win on some fronts.
At the same time, we won't get where we want to be if other companies decide to stick to this example, and as hackers, we won't achieve many of the groundbreaking things that you will see us reach with open-source tools in our hands. And, if we don't exercise caution, we might confuse this for the kind of openness that we all come here to learn from. So it's a mixed bag.

Still Haunting Our Past A Bit

As mentioned, this driver is for the RTX 2000 series and beyond. Old cards are still limited to either the proprietary driver or Nouveau – which has a history of being hamstrung by NVIDIA. Case in point: in recent years, NVIDIA has reimplemented vital features like clock control in a way only accessible through a signed firmware shim with a closed API that's tricky to reverse engineer, and has been uncooperative ever since – which has hurt the Nouveau project with no remedy in sight. Unlike with AMD helping overhaul code for the cards released before its open driver dropped, this problem is here to stay. Nouveau will live on, however. In part, it will still be usable for older cards that aren't going anywhere, and in part, it seems that it could help replace the aforementioned userspace libraries that remain closed-source. The official NVIDIA release page says it's not impossible that the Nouveau efforts and the NVIDIA open driver efforts could be merged into one – a victory for all, even if a tad bittersweet. Due to shortages, you might not get a GPU to run this driver on anyway. That said, we will recover from the shortages and the mining-induced craze, and prices will drop to the point where our systems will work better – maybe not your MX150-equipped laptop, but certainly a whole lot of powerful systems we are yet to build. NVIDIA is not yet where AMD and Intel stand, but they're getting there.

[Tux penguin image © Larry Ewing, coincidentally remixed using GIMP.]
Hey everybody, and welcome to /r/create_a_reddit! Just want to let you know about a few rules. All posts must be about subreddits that you want to see created. Please use the appropriate flair when posting [Idea][Help][Team Up][Discussion]. Please no irrelevant posts or spam; they will be removed, and repeated offenses will get you banned D: And that's it! I hope this sub will be able to help you out! Best of luck, and DFTBA!

Hi! I've been playing with this new app byte, it's really fun. Can someone please create a subreddit to talk about it and discuss new ideas on how to make better bytes and share or whatever??? That would be so nice. I'll leave a link here to the app's site http://www.byte.co/latest please be cool!

Moderators often step outside their role to moderate a sub and instead kill posts they don't agree with, arbitrarily ban individuals, and commit a host of other infractions that harm the reddit community as a whole. This is the sub to report mod actions and investigate subreddit behaviors to establish a pattern. Redditors need a forum to air grievances with mods that they cannot raise in the communities the offenders moderate.

I had the idea while watching an old Vsauce video. So, basically we'll have a thread for each color, with each user contributing an adjective or something to describe the thread's color. That's the base idea; as for flairs, I guess they would show how many words you've given that sufficiently describe a color.

So I know there are a few subs out there to report spammers, so the reddit community knows about them and can be on the lookout, but what about a sub created to report users who are liars? That way, people know who they are, and instead of just believing them, they can either question them or just straight up block them. Of course, you would need to show proof, but what do you guys think of this idea?
Hey guys, so if you didn't know, I'm going into the USAF, and the job I'm going into is either TACP or CCT. And I really want to create a subreddit about those two. If you have any information, and you know what it is, comment below, and maybe we can team up and make it! Let me know!! So, I have an idea that I think would be awesome for a sub, but I just don't have time to make it or maintain it, so I think it would be great if somebody else made it. It basically would be a compilation of phrases that men/women usually say. Those phrases would then have the translation of the same sex, and also a translation of the opposite sex. Here's an example: Woman: It's fine. Man's translation: It's fine. Woman's translation: Everything is not fine. The word that usually ends the argument when the woman knows she's right and you need to shut up. What do you guys think? Somebody want to create this sub?
I have a Windows 2003 domain with two sites and two domain controllers. Recently, at one of the sites, I started getting the following error when trying to add a computer to the domain: "Logon Failure: The target account name is incorrect." I've also noticed that from a number of computers (not all) in the same site, I cannot browse file shares - I get the same error message.

Examining the error logs on the local DC (at the site with the problems), I have a number of error events.

In the Application log, a high occurrence of Event 1053: "Windows cannot determine the user or computer name. (The target principal name is incorrect.) Group Policy processing aborted."

In the System log, a high occurrence of Event 4: "The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/eagle.detect.local. The target name used was cifs/eagle.detect.local. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (DETECT.LOCAL), and the client realm. Please contact your system administrator." (The server and target names change often - host/, cifs/, dns/, ldap/ - and I get it for both of my servers, falcon and eagle, both with and without FQDN.)

In the Directory Service log, a high occurrence of Events 1865, 1311, and 1566: "The Knowledge Consistency Checker (KCC) was unable to form a complete spanning tree network topology. As a result, the following list of sites cannot be reached from the local site." "The Knowledge Consistency Checker (KCC) has detected problems with the following directory partition. There is insufficient site connectivity information in Active Directory Sites and Services for the KCC to create a spanning tree replication topology. Or, one or more domain controllers with this directory partition are unable to replicate the directory partition information."
This is probably due to inaccessible domain controllers. And Event 1566: "All domain controllers in the following site that can replicate the directory partition over this transport are currently unavailable."

I have been experiencing some WAN slowness over the last couple of days, but I'm not sure if this is causing part of the problem. I also recently had a power outage at the failing site. I have sustained these failures in the past and didn't have these problems. I can ping all of the involved machines to and from each other. I can browse files by IP but not by name or FQDN. I can also RDP to and from these servers (by name) with no problems.

In doing some searching here on EE and also Google, Microsoft, etc., it sounded like I might have multiple SPN entries for the server(s). I ran setspn -L on both DCs, and I guess I'm not sure what I'm looking for to see if there is a problem - they look "normal" to me. (I can send results on request.) One suggestion was to disjoin the failing DC and rejoin it, but I don't really want to do that if I can avoid it. Does anyone out there have some suggestions or know where the root cause might be? Many thanks in advance.
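One quick way to act on the duplicate-SPN theory is to collect the `setspn -L` output for each account and check whether any SPN is registered to more than one account. A small illustrative sketch (the sample data is hypothetical, merely shaped like output for the two DCs in the question):

```python
from collections import defaultdict

def find_duplicate_spns(spns_by_account):
    """Given {account: [SPN, ...]} (e.g. collected from `setspn -L <host>`
    for each computer account), return the SPNs registered to more than
    one account - a common cause of KRB_AP_ERR_MODIFIED."""
    owners = defaultdict(set)
    for account, spns in spns_by_account.items():
        for spn in spns:
            owners[spn.lower()].add(account)  # SPNs compare case-insensitively
    return {spn: sorted(accts) for spn, accts in owners.items() if len(accts) > 1}

# Hypothetical data - NOT the asker's real setspn output:
sample = {
    "EAGLE$":  ["host/eagle.detect.local", "cifs/eagle.detect.local"],
    "FALCON$": ["host/falcon.detect.local", "cifs/eagle.detect.local"],
}
print(find_duplicate_spns(sample))
# {'cifs/eagle.detect.local': ['EAGLE$', 'FALCON$']}
```

If the result is empty across all accounts, duplicate SPNs can be ruled out and attention can shift to replication and secure-channel password issues.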
Summary: Financial Messaging Services Bus (FMSB) is a vertical industry implementation of Microsoft's Enterprise Service Bus Toolkit 2.0 on top of BizTalk Server 2009 and BizTalk Accelerator for SWIFT. FMSB greatly improves time to market for many complex integration solutions, especially in the banking and capital markets industries. This paper explains the rationale behind FMSB's creation, provides a high-level description of the FMSB architecture, and discusses how FMSB is used to simplify application connectivity to SWIFT. FMSB helps software developers and solution architects by providing components and functionality within the engine, which saves development time and delivers more value from the engine itself. This document assumes the reader has a basic understanding of generic ESB concepts. For further reading on the Microsoft ESB Toolkit, refer to: http://www.microsoft.com/soa/solutions/esb.aspx and http://msdn.microsoft.com/en-us/biztalk/dd876606.aspx

Financial solutions (though this applies to any industry solution in the end), especially those implementing or leveraging a full-fledged messaging framework, can in fact constitute a foundation platform for the development of a specific application domain (payments, capital markets, government solutions, manufacturing, ...) where integration technology, data transformation, workflow management and other intermediary services are used to orchestrate transaction flows among different systems running on different, heterogeneous platforms. In addition to messaging, some commonly used services implementing specific behaviors are required for transaction processing, e.g., validation, routing, exception management, and repair. By developing these mechanisms as reusable services, the messaging infrastructure becomes more than an integration framework - it takes on the nature of a bus architecture where the lifecycle of a transaction can be mapped, calling the appropriate services as necessary.
This is the essence of the FMSB (ESB): taking common processing services, abstracting and bundling them as reusable services that can be configured at implementation time and that track execution KPI data as well as custom-defined KPIs. In addition, such services can be exposed to third-party applications to leverage the preconfigured processing of the bus components, thereby enhancing client value. When defining FMSB, it is important to note that these services are business services, as defined by Microsoft Enterprise Service Bus (ESB) 2.0. The overarching concept is to use ESB to orchestrate all services and reuse them as needed.

The basic architectural elements of a financial services application can be categorized into the following segments or layers, as shown in Figure 1.

Figure 1: Financial Application Architecture

When considering the architecture of a financial services application, FMSB sits in the layer known as "Business Process and Orchestration", which is covered by the Microsoft technology stack and provides integration, orchestration, transformation, and workflow services. FMSB can be deployed directly to a financial institution client infrastructure project, but also embedded in a Microsoft partner application solution.

FMSB and Microsoft ESB

The FMSB is built principally on BizTalk Server because many of the services are implemented using BizTalk and Accelerator for SWIFT components. To conform to BizTalk ESB architectural best practices, the FMSB was developed upon the BizTalk ESB Toolkit. Most of the FMSB components are very generic and reusable for any solution built on the ESB Toolkit.
The base ESB architecture is also the base architecture for FMSB, as shown in Figure 2.

Figure 2: FMSB and ESB

Financial Messaging Service Bus extends ESB by providing:
- Resolvers which simplify solution creation by implementing support for:
  - Multipart messaging (Read Message Part, Replace Message Part)
  - Retrieving configuration from the Dashboard (FMSB Value)
  - Retrieving complex configuration data for the SWIFT service (SWIFT Service) (not covered as part of this post)
  - Storing itinerary designer values into the itinerary runtime
- Loopback adapter (the message doesn't leave the message box)
- Configuration model for defining BAM tracking data for service/itinerary execution
- Service Broker Orchestration implementation
- Silverlight Dashboard (built on the Composite framework, previously known as Prism) with 5 modules
- Set of financial services and itineraries together with a configuration model (not covered as part of this post)

The FMSB architecture is presented in the following figure:

Figure 3: FMSB architecture as an add-on for ESB (red circles represent FMSB add-ons)

- Core extensions of ESB (enhanced runtime, tracking KPIs during itinerary execution)
- Extended exception handling (support for invoking a pre-defined exception itinerary)
- Loopback adapter (helps to mix messaging and orchestration services inside the same itinerary)
- Configuration database (a new FMSB resolver extends the BRE and UDDI resolvers and provides generic configuration in the Silverlight dashboard)
- Silverlight self-service dashboard (for monitoring live data, viewing KPI reports, configuring KPIs (BAM), and SWIFT service administration)
- Interact and FileAct (SWIFT-specific SAG adapters) support (not covered as part of this post)

FMSB has several modules which can run and be installed separately:
- CORE modules - these modules can be reused without the SWIFT modules. Artifacts include resolvers, the adapter, the orchestration service broker, the database, and Entity Framework models.
- SWIFT modules - these modules are connected with BizTalk Accelerator for SWIFT (A4SWIFT) and pre-built for reuse in SWIFT scenarios. The SWIFT modules use the Core modules and require A4SWIFT and the BizTalk SWIFT adapters to be installed. (Not covered as part of this post.)
- Tracking modules - these modules provide enhanced tracking capabilities over ESB and can be installed independently of the other modules. They require the BAM infrastructure.
- Dashboard - modules presenting the Silverlight experience for working with the Dashboard capabilities and the configuration model. Independent of the other modules.

The following figure presents the relationship of all FMSB modules.

Figure 4: Core + Tracking + Dashboard modules with configuration stores

The need for rich BI within ESB

Like an ocean surrounding an iceberg, business performance management (BPM) provides the business context for performance dashboards, which are layered applications built on a business intelligence and data integration infrastructure (i.e., the base of the iceberg). The most visible elements of a performance dashboard are the scorecard and dashboard screens, which display performance data using leading, lagging, and diagnostic metrics. In a custom implementation, extracting relevant business data for a dashboard isn't an easy task. In the ESB architecture, the ESB runtime (the Dispatcher in the messaging scenario and the Advance method in the orchestration scenario) has full control of every message flow. But even with all this knowledge, ESB 2.0 doesn't provide a full tracking feature. With the pure ESB runtime you can't extract reports that answer common questions: How many itineraries/services ran in the past? How many itineraries/services are currently running? In any financial services application, it is very common to need answers to questions like the following: How many payments have been processed today? How many exceptions did we have? How many were urgent requests? How were today's payments cleared?
How many were bulk payments? How many were wire payments? Domestic vs. cross-border? Who were our top 5 customers today? What percentage of the total came from these 5?

These are the main reasons why we enhanced the tracking capability of the ESB Toolkit. The enriched ESB runtime (with the FMSB assemblies „..V1.dll") now has the capability to extract data using a new BAM interceptor. These data include:
- Itinerary data (start time, end time, name, version)
- Service data (start time, end time, business name, status, ...)
- KPIs inside the message body (user-configured)

The interceptor extracts the itinerary and service data from the itinerary header. KPIs inside the message body are extracted according to the configuration model and stored in BAM. The administrator of the system defines which itineraries and services should be tracked (ESB Itinerary DSL model), how to extract KPIs from the message body, and the tracking entity (the Activity-with-Checkpoints analogy from BAM). See the screenshot below.

With FMSB, configuration of KPIs isn't done inside Excel (nor custom XML). The administrator of the system (or a business person) can use the Dashboard, with drag-and-drop functionality, to define all the necessary data (Activities, Checkpoints, Cubes, Measures, Dimensions) together with the service position where this data should be tracked. This model is published in:
- the configuration model for tracking
- the BAM star schema to persist tracked data

During runtime, the tracking interceptor reads the configuration model and extracts data from the message body as defined. The Dashboard presents visualizations of the cubes inside SQL Analysis Services. This is a generic tool and can be reused for any cube inside Microsoft SQL Analysis Services.
- Sources - cubes from SQL Analysis Services (OrderDocumentSource)
- Measures - defined measures for the selected cube (CountOf)
- Dimensions - defined dimensions for the selected cube (CustomerName, RequestType)
- Filter - a dimension for filtering (same as Dimension)
The Dashboard provides several pre-defined types of reports (Column, Line, Pie, Bar, Area, Doughnut, Point, StackedArea) for any source. The following is a sample Column report. Selecting a different report type redraws the view with the same data.

The FMSB installation creates BAM cubes for:
- Storing itineraries/services. These cubes provide the source data for reports like the one below (percentage of service execution).
- Storing itineraries/services as a real-time aggregation for a live service view of the system.

The LiveData view presents the IT view of the live system. See the screenshot below for details:
- Current itineraries working status on the system - "How many itineraries are currently working?"
- Current itineraries status - "What is the status of itineraries on the system?"
- Current working services status - "How many services are currently working?"

Note: BAM stores data into the BAM RTA cubes with a delay. The above data gives IT administration great insight into exactly what the current system/service/itinerary load is.

Important note: The tracking architecture extension and the dashboard feature in FMSB are designed to be generic and can be used for any BizTalk ESB Toolkit implementation; they are not restricted to the financial services vertical. If a BizTalk implementation does not warrant the ESB Toolkit, the dashboard can still be used to view BAM data more visually.

FMSB provides a great set of ESB add-ons. With its core functionality, the benefits are available to developers, business users, and solution architects alike. By reusing the ESB and BizTalk runtimes, FMSB provides a solid foundation for any domain-specific development.
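To make the interceptor idea described earlier concrete: conceptually, it reads a configuration model mapping KPI names to locations in the message body, then pulls those values out at each tracked checkpoint. A simplified Python sketch of that extraction step (the real FMSB interceptor is a BizTalk/BAM component; the config keys and XPaths here are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical KPI configuration: checkpoint name -> XPath into the message
# body (in FMSB the real model is authored via the Silverlight dashboard).
KPI_CONFIG = {
    "Amount":       "./Payment/Amount",
    "CustomerName": "./Payment/Customer",
    "RequestType":  "./Payment/Type",
}

def extract_kpis(message_xml, config=KPI_CONFIG):
    """Pull configured KPI values out of a message body, analogous to what
    the BAM interceptor does at tracked service checkpoints."""
    root = ET.fromstring(message_xml)
    kpis = {}
    for name, xpath in config.items():
        node = root.find(xpath)
        kpis[name] = node.text if node is not None else None
    return kpis

msg = """<Msg><Payment><Amount>250.00</Amount>
<Customer>Contoso</Customer><Type>Wire</Type></Payment></Msg>"""
print(extract_kpis(msg))
# {'Amount': '250.00', 'CustomerName': 'Contoso', 'RequestType': 'Wire'}
```

The key design point is that the extraction logic stays generic: adding or changing a KPI is a configuration edit, not a code change, which is what lets business users maintain the tracking model from the dashboard.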
Ocean Protocol offers 3 types of pricing options for asset monetization. The publisher can choose the pricing model which best suits their needs while publishing an asset. The pricing model selected cannot be changed once the asset is published. The price of an asset is determined by the number of Ocean tokens a buyer must pay to access the asset. When users pay the right amount of Ocean tokens, they get a datatoken in their wallets, a tokenized representation of the access right stored on the blockchain. To read more about datatokens and data NFTs, click here.

With the fixed price model, publishers set the price for the data in OCEAN. Ocean Market creates a datatoken in the background with a value equal to the dataset price in OCEAN, so that buyers do not have to know about the datatoken. Buyers pay the amount specified in OCEAN for access. The publisher can update the price of the dataset at any time. A FixedRateExchange smart contract stores the information about the price of the assets published using this model. Publishers can choose this fixed pricing model when they do not want Automated Market Maker (AMM) pools to drive price discovery. If the publisher has already analyzed and estimated the worth of the dataset and is ready to sell the asset at a constant price, this is the suitable pricing model. The image below shows how to set the fixed pricing of an asset in the Ocean Marketplace. Here, the price of the asset is set to 10 Ocean tokens.

With the dynamic pricing model, the market defines the price with a mechanism derived from Decentralized Finance (DeFi): liquidity pools. While the publisher sets a base price for the token in OCEAN, the market will organically discover the right price for the data. This can be extremely handy when the value of the data is not known. For each asset with dynamic pricing, Ocean Market helps create an Automated Market Maker (AMM) pool of the datatoken and Ocean tokens.
AMM enables unstoppable, decentralized trading of the assets in the liquidity pool. AMM uses a constant product formula to price tokens, which states:

x * y = k

where x and y represent the quantities of the two different tokens in the pool and k is a constant. A liquidity pool is a reserve of tokens locked in the smart contract for market making. A buyer or a seller of an asset exchanges token x for token y or vice versa, and the AMM calculates the exchange ratio between the tokens based on the mathematical formula above. Ocean Protocol facilitates the creation of the datatoken/OCEAN liquidity pool with Balancer smart contracts. The publisher only needs to approve a blockchain transaction that creates the AMM while publishing the asset; Ocean Market thus hides the complexities of deploying an AMM pool. While publishing an asset with dynamic pricing, the publisher decides the initial ratio of datatokens and Ocean tokens in the pool, thereby setting the initial price of the asset. The price of the asset is later dependent on the pool's liquidity and the price impact of trades in the pool. Publishers can set the pricing model of an asset to dynamic pricing if they want the market to decide the asset price and thus enable automatic price discovery. The image below shows how to set dynamic pricing for an asset in the Ocean Marketplace. Here, the asset price is initially set to 50 Ocean tokens. Ocean Protocol also allows publishers to set the pricing using the ocean.js and ocean.py libraries.

With the free pricing model, buyers can access an asset without paying for it, except for the transaction fees. With this pricing model, datatokens are allocated to the dispenser smart contract, which dispenses datatokens to users for free whenever they access an asset. Free pricing is suitable for individuals and organizations working in the public domain who want their datasets to be freely available.
Publishers can also choose this model if they publish assets under licenses that require them to be freely available. The image below shows how to set free access for an asset in the Ocean Marketplace.
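The constant-product mechanics behind dynamic pricing can be sketched in a few lines. This is illustrative math only, not the Ocean/Balancer contract code; it ignores swap fees, and the pool sizes are hypothetical, chosen to match the 50-OCEAN example price above:

```python
def spot_price(datatokens, ocean):
    """Current price of one datatoken in OCEAN (y / x)."""
    return ocean / datatokens

def buy_datatokens(datatokens, ocean, ocean_in):
    """Swap `ocean_in` OCEAN into the pool under x * y = k.

    Returns (datatokens_out, new_x, new_y); fees are ignored for clarity.
    """
    k = datatokens * ocean
    new_ocean = ocean + ocean_in
    new_datatokens = k / new_ocean
    return datatokens - new_datatokens, new_datatokens, new_ocean

# Hypothetical pool: 100 datatokens against 5000 OCEAN -> price of 50.
x, y = 100.0, 5000.0
print(spot_price(x, y))  # 50.0
out, x, y = buy_datatokens(x, y, 500.0)
print(round(out, 4), round(spot_price(x, y), 2))  # 9.0909 60.5
```

Note how a large trade relative to the pool moves the price: paying in 10% of the pool's OCEAN pushes the spot price from 50 to 60.5, which is the "price impact" the passage above refers to.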
How to change OpenVPN ports in IPVanish - IPVanish OpenVPN port (Fire TV/Stick):
1. To change OpenVPN ports for Fire TV/Stick, click on the Settings icon at the top right.
2. Select Port from the menu.
3. Select your preferred port from the list.
If you have any questions about how to change the OpenVPN port in our app, please contact the support team.

Nginx as a Reverse Proxy for OpenVPN (TCP 443): OpenVPN is running as a daemon using port 1194 just fine. But as soon as I change it to TCP 443, the OpenVPN client logs show that the connection is refused: "Thu Oct 30 22:09:58 2014 us=442688 TCP: connect to [AF_INET]X.X.X.X:443 failed, will try again in 5 seconds: Connection refused (WSAECONNREFUSED)"

If you don't have an obfuscation server, then leave 443->443. Port 25 will point to the Pi's SSH port 22; this is only for my own convenience. In case I want to access the OpenVPN server directly without the obfuscation proxy, I have created a rule 444->443. The service port is the OUTSIDE port that will be used with your PUBLIC IP. udp 1194 is the default OpenVPN port.

Choosing a port: the default port in the above configs is TCP port 443, chosen because of its ability to pass through nearly any firewall, but it is slower than a UDP port will be.

OpenVPN over TCP port 443: another way of hiding your OpenVPN connection from the prying eyes of Egypt's DPI is to use Transmission Control Protocol (TCP) port 443, which is the port used by HTTPS. TCP port 443 is unlikely to be blocked, even in Egypt, as this is the port relied on by online banking, online retail, and any HTTPS website.

TCP ports 502, 501, 443, 110, and 80; L2TP uses: UDP ports 500, 1701, and 4500; IKEv2 uses: UDP port 500; PPTP uses: TCP port 1723 or Protocol 47 (GRE). If you can connect over any of those, you should be able to use at least one of our connection methods. In addition, the PIA application pings our gateways over port 8888.
In addition, the PIA application pings our gateways over port 8888. This is used to TCP ports 502, 501, 443, 110, and 80; L2TP uses: UDP ports 500, 1701, and 4500; IKEv2 uses: UDP ports 500; PPTP uses: TCP ports 1723 or Protocol 47 (GRE) If you can connect over any of those, you should be able to use at least one of our connection methods. In addition, the PIA application pings our gateways over port 8888. This is used to Aug 27, 2016 · The default port and protocol for OpenVPN is UDP/1194. Some server admins may block port 1194 so to get around this we can set OpenVPN to listen on port 443 instead. Port 443 is the default for HTTPS traffic so there is little chance it will be blocked. Where things get interesting is that SSL uses the TCP protocol on port 443. OpenVPN, which is built on OpenSSL libraries, can be configured to run TCP on that same port. Many VPN providers let you do this. When a VPN uses OpenVPN TCP on port 443, any data sent over the connection looks like regular website SSL traffic, not VPN traffic. To set this up, configure an OpenVPN server to listen on TCP port 443, and add a firewall rule to pass traffic to the WAN IP (or whatever IP used for OpenVPN) on port 443. There are no port forwards or firewall rules required to pass the traffic to the internal IP. In the custom options of the OpenVPN instance, add the following: Apr 18, 2019 · Specifically, I am using OpenVPN to connect to Toronto Private Internet Access server on port 443. This was recommended by PIA. The connection correctly establishes, but, then traffic halts after a minute of operation. The incoming traffic just halts then an expiry timer goes off and the connection closes. OpenVPN is TLS-based and uses the standard TCP 443 port. To switch to OpenVPN, go to the "point-to-site configuration" tab under the Virtual Network Gateway in portal, and select OpenVPN (SSL) or IKEv2 and OpenVPN (SSL) from the drop-down box.
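The server-side change described in several of the snippets above boils down to two directives in the OpenVPN server configuration. A minimal sketch (these are standard OpenVPN options; the matching change must also be made in the client config):

```conf
# Listen where HTTPS normally lives, so the tunnel passes strict firewalls.
# TCP on 443 blends in with SSL traffic but is slower than the default UDP 1194.
proto tcp
port 443
```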
Great internet speed but several services almost unusable

I've had some issues with my PC for the last few months, and I'm not sure what else to do. Some information:
- I have 1GB download speed
- PC has 96GB RAM, Intel i9 9900K, GeForce RTX 2070 Super
- I have problems with YouTube (hangs while buffering every 10 seconds, even on the lowest video quality)
- I am constantly disconnecting/reconnecting to Google Docs and Sheets. I'll write something and it will disappear and refresh the connection. This happens literally every 10 seconds
- Download speeds are horrific. Takes ages to download relatively small files
- I can stream Twitch just fine, and all video games work seamlessly
- This happens on both Ethernet and my WiFi adapter
- Happens in all web browsers
- Happens on all accounts on my PC (even a freshly created one)
- All other devices in the house are completely fine
- No VPNs running
- Windows Defender off, no other firewalls
- I'm a software engineer, and I run react-native servers. But even when I restart my computer and make sure all node processes are killed, the problem persists
- The only thing I've done in my BIOS is disable virtualization for Valorant's Vanguard
- Network usage is 0% in task manager

I'm at a complete loss. I've been a Software Engineer for 15 years, but my network experience is limited. Any suggestions for what else I can do to diagnose this would be much appreciated. Thank you in advance!

I'd have a look at https://www.bufferbloat.net/projects/ for starters - see if you can eliminate it being your network hardware/configuration. [A mid-2k Pentium is fast enough to get full throughput on a 1gbps connection (I have one here I use as my gateway to the whole building), so we ought to be fairly safe in assuming it's not intrinsic to your hardware; it's a configuration issue… somewhere]

Thanks for the reply, @Tetsujin.
I got this result for my phone: https://www.waveform.com/tools/bufferbloat?test-id=1cfd88e7-93a4-49f5-8f18-a6480e8c6833 My PC is stuck on "Warming Up" for the download step. Waited for about 30 minutes with no luck. It won't even finish the test.

Presumably the phone is on WiFi - try the desktop both wired & WiFi & see how it compares. [phone results are absolute pants, btw… but I guess you knew that already ;)]

Wow, PC passed with flying colors on WiFi: https://www.waveform.com/tools/bufferbloat?test-id=dbc99de7-a4ce-454f-b66b-7f7b1f8234ec I just realized that my WiFi adapter was plugged into a 4-way splitter, which definitely could have been why it was so slow before. I guess that means it's either: my motherboard / network hardware, or the LAN ports on the router? Would really like to get this fixed for Ethernet. Either way, thanks for at least getting me this far!

I can't be specific, but bufferbloat has some useful workthroughs to help, as does DSLReports.com. It's really worth trying anything they suggest & seeing what combo works. You're still a long way short of what you're paying for. This is my result on a 200/20 line; I'm getting pretty much what I'm paying for [& this is peak time locally, so contention ratios are high]: https://www.waveform.com/tools/bufferbloat?test-id=2f8a329e-f483-4f91-adf3-f7af96ad7f42

Thank you so much. You have no idea how much this has been bothering me. This was incredibly helpful. I'll take all this info and investigate further.

Wired connections should be at least as fast as Wi-Fi - anything else indicates a problem with the wired connection. You could try using a different port on the router or replacing the cable with a different one. I don't like "just use wi-fi" as a solution, because wi-fi sucks, relatively speaking.

Yep, I agree... unfortunately I've tried every port and multiple cords.
Calculating the time of an Insert Sort and a Merge Sort

I am taking a class in programming and have started algorithm analysis. The question given is to calculate the sorting time for an insert sort and a merge sort of 1e6 and 1e9 numbers, given that both take 1 second to sort 1e3 numbers. I am not sure if I understand time complexity fully, but since insert sort is O(n^2) and merge sort is O(n log n), this is how I am thinking:

If using insert sort takes 1 second to sort 1e3 numbers, increasing the amount of numbers to sort by a factor of 1e3, then the time increases by a factor of 1e3^2, giving 1e6 seconds. The same goes for sorting 1e9 numbers: we increase the sorting time by a factor of 1e6^2, giving 1e12 seconds. Am I thinking about this correctly?

As for the merge sort, if sorting 1e3 numbers (roughly 2^10) takes 1 second, sorting 1e6 numbers (roughly 2^20) increases the sorting time by a factor of 2^20 * 20, or roughly 2e7 seconds. Sorting 1e9 numbers (roughly 2^30) increases the sorting time by a factor of 2^30 * 30, or about 3.2e10 seconds. Is this correct? As I stated, I am not sure I understand time complexity, so if this is wrong, how am I supposed to think about this?

You know the time complexities of those algorithms, but didn't they teach you the definition of time complexity before that?

I'd be careful with the sentence "If using insert sort takes 1 second to sort 1e3 numbers, increasing the amount of numbers to sort by a factor of 1e3, then the time increases by 1e3^2, or 1e6 seconds". The time is multiplied by a factor of 1e6. Although it does result in a total of 1e6 seconds, your formulation is misleading and I suggest correcting it.

@Stef Fixed that now. English is not my native language, so some grammatical errors are bound to pop up, haha.

@mangusta Well, they have taught us the definitions. I am just not sure that I completely understand how they are used, as in this example.

Insertion sort

Your reasoning for insertion sort is correct.
I'd be careful with the sentence "If using insert sort takes 1 second to sort 1e3 numbers, increasing the amount of numbers to sort by a factor of 1e3, then the time increases by 1e3^2, or 1e6 seconds". The time is multiplied by a factor of 1e6. Although it does result in a total of 1e6 seconds, as you correctly stated, your wording is misleading and I suggest correcting it.

Merge sort

As for merge sort, the calculation is slightly more complex because of the logarithm. Imagine there is a constant k such that the execution time of merge sort is always exactly k * n * log(n), where n is the number of elements to be sorted. You are given: k * 1e3 * log(1e3) = 1s. You want to figure out the value of k * 1e6 * log(1e6).

The good news is that by the properties of the logarithm, log(1e6) = log((1e3)^2) = 2 log(1e3). Therefore k * 1e6 * log(1e6) = k * 1e6 * 2 * log(1e3) = (2e3) * (k * 1e3 * log(1e3)). Thus the running time of merge sort on an input of 1e6 elements is 2e3 seconds. The reasoning for 1e9 is the same as for 1e6, so I will let you find out by yourself.

Sanity check

Insertion sort runs in time proportional to n^2. Merge sort runs in time proportional to n log(n). The logarithm is a very slow-growing function; when n starts getting large, the running time of merge sort is much shorter than the running time of insertion sort. Your initial answers were: on an input of 1e6 elements, insertion sort takes 1e6 seconds and merge sort takes 2e7 seconds. This cannot be correct: 2e7 seconds = 20,000,000 s is 20 times longer than 1e6 seconds = 1,000,000 s!

Thanks, that makes sense!! So, following the same reasoning, I get that for 1e9 numbers the time increases by a factor of 3e6. Sounds about right?
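As a quick numerical check of the reasoning in this thread, here is a small sketch assuming the idealized cost models t = c·n^2 (insertion sort) and t = c·n·log n (merge sort), with each constant calibrated from "1e3 numbers take 1 second":

```python
import math

# Calibrate the constants so both algorithms take exactly 1 s on n = 1e3.
c_insert = 1.0 / 1e3**2
c_merge = 1.0 / (1e3 * math.log(1e3))

def t_insert(n):
    # Idealized insertion sort cost: c * n^2 seconds.
    return c_insert * n**2

def t_merge(n):
    # Idealized merge sort cost: c * n * log(n) seconds.
    return c_merge * n * math.log(n)

print(t_insert(1e6))  # 1e6 s, as derived above
print(t_merge(1e6))   # 2e3 s, since log(1e6) = 2 * log(1e3)
print(t_merge(1e9))   # 3e6 s, since log(1e9) = 3 * log(1e3)
```

Note that the base of the logarithm cancels in the ratio, so it does not matter which base you use.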
When you're looking at your business performance, you need to know your market equation: it decomposes all the effects behind the rise or fall of your activity, so you can understand where you should act to improve. The simplest example of a market equation is your company revenue, which is just an amount of money resulting from a number of items sold at a certain price. But the equation could be completed by numerous other elements, for example the number of distribution points, or the number of items available at each distribution point… and this can be done for every type of performance: your number of incidents, your conversion rate… A market equation can be really powerful for analysing a performance and properly splitting the reasons behind your success or your fall, but it should also follow some specific rules.

1. What is a market equation?

A market equation, which could also be called an "indicators relation", is the composition of a performance indicator. In mathematical terms this could be written as:

KPI = f(PI_1, PI_2, …, PI_n)

The beauty of writing it like this is that it translates all your secondary performance indicators into the real Key Performance Indicators that actually matter, so that when taking decisions based on business intelligence, you avoid wasting time on things that don't actually matter. Now, the good news is that it's generally not hard to write down this market equation when you know your business: the function "f" is usually a multiplication, and the performance indicators are usually things you're already following, just in separate views. The next chapters of this article go a bit more into the details of the simple multiplicative/funnel approach, but I'm happy to chat about more complex examples.

2. What is the purpose of building a market equation?

Using the same example as in the introduction of the article, here is a simplistic retail example: as a retailer, I want to understand my Revenue. So, my main KPI is Revenue.
And this could be split as:

Revenue = #ItemsSold × (Revenue / #ItemsSold)

That makes a lot of sense, because I then have two underlying performance indicators (PIs) that help me split the different effects behind what I observe on my main KPI, Revenue. The first underlying PI is #ItemsSold (the number of products I sell), and the other is Revenue / #ItemsSold, which is basically my unit price. Why do I want to split those? Basically because the team in charge of volumes is not the team in charge of pricing. So, in order to identify where I need to improve, I need to split these two effects.

Break down your business effects

Every business has a revenue composed of a specific series of steps, and the same revenue could be built using different breakdowns. For example, a restaurant's revenue could be a number of meals multiplied by an average price; you would then follow just two effects. But if you add some tables to your restaurant, it will be hard to compare your revenue between two periods of time if you don't also follow the number of seats available in your restaurant. And you will have the same issue if you accelerate your service and try to serve three meals per seat instead of two. Just using clients and price won't allow you to follow all the underlying effects of your business. Using a market equation will allow you to separately compare your performance between two periods of time, all other things being equal.

If we now turn to industry, the number of incidents could also be the result of a market equation. If you follow the number of incidents coming from your industrial machines on a weekly basis, you won't be able to compare your incidents if you have doubled the number of machines in your scope. You will need to split your performance at least in two: number of machines, and number of incidents per machine. This is true for almost every activity.

3. A conversion funnel is also a market equation

An eCommerce website will follow its conversion funnel to understand its performance.
This funnel has exactly the same composition and rules as a normal market equation. The website manager will measure the volume of users coming to the website, the number of sessions, the number of users reaching a product page, the number of users reaching the checkout page, and finally the conversion of that checkout into a purchase. As a website owner, you know the steps of your website and are used to measuring the conversion at each step of the funnel, in order to assign mitigation actions to the right people. For example, the volume of users coming to your website is the responsibility of the acquisition manager, while other parts of the funnel could be the responsibility of the conversion team, UX team, marketing team, basket team… or even a mix of responsibilities.

In market equation format, this gives the simple equation below, which could be completed by any other useful steps of your conversion funnel:

#Payments = #Users × (#ProductPagesVisited / #Users) × (#BasketPages / #ProductPagesVisited) × (#Payments / #BasketPages)

If we start exploring this equation, we can follow the conversion at each step of the funnel.
- #Users: volume effect – how many people are visiting my website?
- #ProductPagesVisited / #Users: product page conversion, the ratio of people going from the landing page to the product page.
- …then, of the users on the product page, how many went on to the basket page (step 3), and finally, of the people who made it all the way to the basket page, how many really bought a product (step 4).

I intentionally stopped the market equation at the volume of payments, to play with you using this equation. A number of payments is an interesting indicator, but you might as well want to monitor the "revenue generated", or even the "conversion rate" or the "average basket"… using the same data and just playing with the ratios of our equation. Let's now play with this equation: what should be added to this equation to get the market equation that will give you the revenue? You get it?
Quite simple: you just complete the equation by adding the total amount of revenue generated by these users:

Revenue = #Users × (#ProductPagesVisited / #Users) × (#BasketPages / #ProductPagesVisited) × (#Payments / #BasketPages) × (Revenue / #Payments)

You can simplify this equation to make sure you are measuring the right indicator.

The average basket is more complicated, because it's not a simple indicator: it's a ratio (€/user) composed of two indicators. To get the average basket we need to change the existing equation. We need to remove the volume effect, the first step. Why? Because an average basket considers the average behaviour of your users, not the sum over all your users. Doing that gives us our ratio.

You should now be able to do this exercise on your own. As a starting point you need to understand the indicator: what is its unit? A conversion rate is the percentage of visitors to your website that complete a desired goal (a conversion) out of the total number of visitors. In our case it is the number of visitors that complete a purchase out of the total number of visitors to our website. Using our equation, we have to remove the volume effect (#Users) and remove the "Revenue" term:

#Payments / #Users = (#ProductPagesVisited / #Users) × (#BasketPages / #ProductPagesVisited) × (#Payments / #BasketPages)

You are now familiar with the concept of the market equation and the adaptations you can make to a market equation to get the proper indicator. That said, you are not yet fully equipped to do it on your own with your activity and your real data.

4. Analyse your performance with a tool

How Datama helps to analyse your performance

At Datama we build smart algorithms that analyse business performance, and our tool Datama COMPARE does this market equation calculation and even more. The best way to analyse a market equation is to display a waterfall. The graph in part 3 is an example of a waterfall graph coming from Datama COMPARE, explaining the performance of an eCommerce website between two periods of time. If you observe the graph closely, each step has been converted into the targeted indicator – Revenue in that case.
Instead of comparing different types of things, Datama helps you directly attribute an impact using the same unit at each step.

To go further! It's interesting to compare the impact of each step, but it is even more interesting to know what's driving your performance to rise or fall. And that's where Datama COMPARE becomes really powerful. In this case, with data coming from Google Analytics, Datama helps you understand which levers to activate to boost your growth. The Datama demo can help you get ideas for market equations you may want to build. And don't hesitate to contact our team: we will be more than happy to help on your project, even on much more complex examples than those above.
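To sketch how a funnel market equation supports a waterfall-style read-out, here is a toy decomposition in code. The log-ratio attribution below is one standard way to split a multiplicative change exactly across factors; it is an illustration, not necessarily Datama's exact method, and all numbers are invented:

```python
import math

# Two periods of a toy eCommerce funnel. The market equation is:
# Revenue = #Users x (#ProductPages/#Users) x (#Baskets/#ProductPages)
#                  x (#Payments/#Baskets) x (Revenue/#Payments)
p1 = {"users": 10000, "product_pages": 4000, "baskets": 1200,
      "payments": 600, "revenue": 30000.0}
p2 = {"users": 12000, "product_pages": 4200, "baskets": 1500,
      "payments": 690, "revenue": 37950.0}

steps = ["users", "product_pages", "baskets", "payments", "revenue"]

def factors(p):
    # Volume effect first, then each step-to-step conversion ratio.
    out = [p["users"]]
    for a, b in zip(steps, steps[1:]):
        out.append(p[b] / p[a])
    return out

f1, f2 = factors(p1), factors(p2)

# Log-ratio attribution: the revenue change splits exactly across factors.
total_log = math.log(p2["revenue"] / p1["revenue"])
contrib = [math.log(b / a) / total_log for a, b in zip(f1, f2)]
assert math.isclose(sum(contrib), 1.0)  # the waterfall closes exactly

names = ["volume"] + [f"{b}/{a}" for a, b in zip(steps, steps[1:])]
for name, c in zip(names, contrib):
    print(f"{name}: {c:+.1%} of the revenue change")
```

Because the factors multiply back to revenue, each bar of the waterfall carries the same unit as the KPI, which is the property the article highlights.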
A python script to automatically generate the arguments for Joshua Wright's 'asleap' program.

This video demonstrates an offline (asleap) and online (THC-pptp-bruter) attack on an MSCHAP v2 software VPN.
Watch video on-line:
Download video: http://download.g0tmi1k.com/videos_archive/asleap___THC-pptp-bruter.mp4

From wireshark (and a Man In The Middle attack), you can get the "CHAP Challenge" and "CHAP Response". We can break these values down:
- CHAP Challenge = Auth Challenge (16 bytes)
- CHAP Response = Peer Challenge (16 bytes) and Peer Response (24 bytes)

After finding the Auth Challenge and Peer Challenge, we can add these to the username and hash (SHA1) the result. This generates the "Challenge". Once we have the challenge, we can feed it into asleap, along with the CHAP Challenge. This script does all the work for you (and more); it just needs the values from wireshark. As well as offering different styles of attack, you can either use a dictionary/wordlist or use 'Genkeys' to generate a look-up file for asleap (which is recommended). The script can also automatically run asleap with your arguments.

- The script - chap2asleap.py

Home Page: http://www.willhackforsushi.com/Asleap.html
Download Link: http://www.willhackforsushi.com/code/asleap/2.2/asleap-2.2.tgz
Home Page: http://freeworld.thc.org
Download Link: http://freeworld.thc.org/download.php?t=r&f=thc-pptp-bruter-0.1.4.tar.gz
Home Page: https://blog.g0tmi1k.com/
Download Link: http://github.com/g0tmi1k/

How to use chap2asleap.py
- chmod 755 chap2asleap.py
- python chap2asleap.py

Song: Two Fingers - Keman Rhythm
Video length: 03:03
Capture length: 5:48
Blog Post: https://blog.g0tmi1k.com/2010/03/chap2asleappy-v011-vpn/

2011-04-05 - v0.2 - [>] Updated
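The combining step the script automates can be sketched as below, following the MS-CHAPv2 ChallengeHash construction from RFC 2759 (the sample byte values are placeholders, not a real capture):

```python
import hashlib

def challenge_hash(peer_challenge: bytes, auth_challenge: bytes,
                   username: bytes) -> bytes:
    """MS-CHAPv2 ChallengeHash (RFC 2759): the 8-byte challenge asleap
    needs is the first 8 bytes of SHA1(PeerChallenge||AuthChallenge||UserName)."""
    digest = hashlib.sha1(peer_challenge + auth_challenge + username).digest()
    return digest[:8]

# Placeholder values; the real ones come out of the wireshark capture.
peer = bytes(16)   # 16-byte Peer Challenge from the CHAP Response
auth = bytes(16)   # 16-byte Auth Challenge (the CHAP Challenge)
chal = challenge_hash(peer, auth, b"user")
print(chal.hex())  # 8 bytes -> 16 hex characters
```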
Best Reasons to Study Data Science in 2022

Today's data-driven environment makes learning and mastering data science essential for every organization, which makes it an attractive career option. Analytics is essential for digital agencies, since the ultimate goal is generating meaningful insights from data and assisting organizations in leveraging their power. This post explains why data science, advanced analytics, and other artificial intelligence-related careers are in demand now and will remain so in 2022.

The number of job opportunities

Many professions and functions are available to data scientists, including IT, healthcare, and security. Data engineers, data scientists, and data managers can all be employed based on their skills. As data scientists provide economic value to enterprises in today's data-driven environment, data science is a prominent career option. With many companies actively recruiting data scientists in recent years, data science offers a variety of networking opportunities. Salaries vary by position and are influenced by industry, location, and services. Brand building is enhanced by data scientists' use of data analysis methodologies. Data science specialists' pay is determined by the value they bring to the company. You can earn more money yearly by increasing your analytical skills, for instance through online classes.

Demand is on the rise

Globally, data scientists are in high demand to improve data-driven activities as the digital world becomes more sophisticated every day. For commercial development, any major company should hire a data scientist capable of gathering, analyzing, and interpreting massive amounts of data. As a result, digital companies are constantly looking for data scientists with the appropriate data science abilities to ensure the success of their data analytics.

Defeats the competition

Data science is a rapidly expanding field.
The demand for data scientists is greater than the supply, even though there are many professionals in the field. Compared to other typical IT jobs, it is still a growing field, so there is less rivalry in data science, giving you a better chance to rise to prominence quickly. Given the current circumstances, there is a mismatch between demand and supply for data scientists, as evidenced by the fact that the number of data scientists is still small.

You don't need face-to-face schooling if you want to learn data science and develop your skills in this subject. Many online data science courses are available, regardless of when or where you want to take them. Taking a data science course allows you to push and reinvent yourself through a self-paced core curriculum that lets you learn data science at your own pace.

Centre of Decision Making

While data science is driven by various talents and responsibilities, being in a decision-making role gives you more opportunities to shine. Data scientists develop diverse abilities, from statistics to IT understanding. As a result, they are at the heart of critical decisions that lead to better outcomes. You gain valuable expertise that can help you establish your own business. You may learn data science to get satisfying work or to develop your own business, because you can obtain the necessary skills through an online data science education. This is one of the advantages of effectively applying data science. If you have a collection of diverse data science knowledge, you have a platinum pass to expand your skills and establish your own start-up. Learning data science is a fantastic way to gain the technical abilities you'll need to contribute significantly to company improvement. Because data science is such an essential topic of study for company development, data scientists with specific knowledge and expertise can expect to advance quickly in their careers.
Learning about cutting-edge technology

To achieve success in data analytics, data science specialists must possess technical abilities, including applying cutting-edge technology. Data scientists are experts in analytics, computer science, communication, and visualization of data-driven insights. Therefore, data science encourages the development of technical skills and promotes the learning of emerging technologies such as artificial intelligence and machine learning.

An exciting career path with a future focus

It is a promising career path for anyone interested in learning more about data science. By understanding data science, you can put yourself in a powerful position. You will be able to advance in your career by learning data science and cutting-edge technology.
I have just updated from a 32-bit version of OpenSuse 11.4 to the shiny new 64-bit version of OpenSuse 12.1 and love it already. Unfortunately my Samsung ML-1665 printer seems unhappy! While using 11.4 I had installed the Samsung unified Linux driver 0.86 and the printer was very chirpy and printed well (not perfectly, but very well). Now I've installed 12.1 and have again installed the Samsung unified Linux driver 0.86, and the printer prints fine again, except it prints everything in portrait mode. I have tried old and new Oo documents which are in landscape mode and they all print in portrait. Can anybody help me out please?

I have one of those, under 11.4 64-bit + KDE 4.x. The first time I installed the Samsung proprietary drivers I noticed that non-kde-native apps, after their print dialog, would call the (quite ugly, tk-style) Samsung print dialog, and that its settings would be applied over the app print settings. Later I found out I could prune most of the driver "goodies" and have the printer behave. If you think this may help you with your 12.1 issues, see this post and perhaps a few before and after it.

Thanks brunomci. I have reverted back to OpenSuse 11.4 for a while, but I've installed the 64-bit version this time and still have the problem that the ML-1665 won't print in landscape, so I'm guessing that the problem is 64-bit related. I have tried adjusting all of the settings in YaST and in the unified driver to no avail; it even says "landscape" in the printer dialogue but still prints in portrait. I won't be buying a Samsung printer again in the future.

This has nothing to do with 64-bit IMO. Did you try the removal of the "excess" driver complements as described in the posts I linked to? I'm quite sure that your settings in the SAMSUNG printer dialog (not KDE's or LO's) are specifying portrait, hence your problem. As a workaround, you can try printing to PDF from LO (which will rotate the page accordingly), open the PDF in OKULAR and print.
And yes, you can have all the printer's functionality after some tinkering (something I didn't have with an HP Laserjet P1005 under 11.3), but HP lasers are generally better integrated with linux via hplip.

If you mean did I remove these:

:/usr/lib/cups/filter> ls -l rastertosamsung*
-rwxr-xr-x 1 root root  52824 Set  4 2009 rastertosamsunginkjet
-rwxr-xr-x 1 root root  32752 Set  4 2009 rastertosamsungpcl
-rwxr-xr-x 1 root root  56720 Set  4 2009 rastertosamsungspl
-rwxr-xr-x 1 root root 147504 Set  4 2009 rastertosamsungsplc

then yes, I did. But I don't have the following folder:

I have lots of other manufacturers but no Samsung! I removed the "excess" files listed above and now the printer won't print anything at all.

If this problem is not 64-bit related, then why did the printer print in portrait and landscape when I ran OpenSuse 11.4 32-bit on the same machine just two weeks ago, using exactly the same driver and settings? It is only since I changed to the 64-bit version that the printer won't print in landscape. Under OpenSuse 11.4 32-bit I didn't have to "tinker" with anything to get perfect printing results every time. I don't really want to have to export my Oo spreadsheet to PDF every morning in order to print off my list of customers for the day. I know that the ML-1665 used to work fine before; I'm just confused as to why it now doesn't, even though I've set everything up like before. Every setting that I can find, whether Samsung or KDE, is set to landscape, and the Oo file printer settings are also set to landscape, but still the printer prints in portrait.

Really appreciate all your time and effort brunomci. Many thanks.

Well, I'm running it under 11.4 64-bit without issues. That's why.
Do go through the thread I gave the link for; it was a work-in-progress - first getting the d*mn thing printing, then solving the twice-run packages, then removing the excess cruft, then making it cups-standard (if the samsung folder is not there you only need to create it), etc. A PITA. I'm not sure what your problem is; here it works OK. Perhaps there is a missing 32bit-compat dependency? Newly installed OSes don't have many of these packages, so it may be worth checking the driver requirements, if any, although I don't remember needing any. Can you print a landscape document from a native KDE app, like Okular? If so, it's an issue specific to LO (and probably firefox, etc.).

I tried printing in landscape from Okular, which also printed in portrait, so I re-installed Oo, which changed nothing either, so I re-installed Suse. The first thing I did once I had my fresh 64-bit installation was to install the unified Linux driver for the ML-1665 and, lo and behold, I could print in landscape! Excellent, I thought, and continued to install the Nvidia drivers and generally update the system. Once I had run the online update in YaST, the Oo icon on my desktop changed and suddenly I'm back to printing everything in portrait… I give up. So it would seem that I'm stuck in portrait for a little while, at least until 12.1 has settled enough for me to upgrade to that, and then I'll see if I have the same problem again. Thanks so much brunomcl for all your help.

Samsung ML-1665 updated software and driver link: Samsung ML-1665 Driver | Samsung Driver Downloads. Hope this helps…
RAyMOND is a patch to modify the N-body/hydrodynamics code RAMSES in order to be able to run simulations in MOND gravity. It includes both the fully non-linear AQUAL formulation of MOND, and the quasi-linear QUMOND formulation. For details see this paper. For a recent example of using the code for cosmological simulations, see this paper. For an even more recent example of using the code, this time to study the external field effect on cluster galaxies, see this paper. The patch may be downloaded as a zipped tarball by clicking this link. Note that for the moment this should be considered a beta release as this has not been tested with the most up-to-date version of RAMSES found on Bitbucket. There are actually two patches, one for QUMOND and one for the fully non-linear AQUAL formulation. Including the patch automatically switches the gravitational solver to that of QUMOND/AQUAL. For both patches an additional MOND_PARAMS section must be added to the namelist with the following parameters: - imond_a0: This is the MOND acceleration scale in m/s^2. This is converted to code units automatically. - mond_n: This is the exponent used in the MOND interpolation function which is hard-coded to be of the form x/(1+x^n)^(1/n) for AQUAL, or its inverse in the case of QUMOND (see Famaey & McGaugh 2012, eqs. 49 and 50). It is an easy matter to use a different mu (or nu) function: just change the appropriate line in the function "mu_function" (or "nu_function") at the end of poisson_commons.f90. Note that it's probably numerically more efficient to remove mond_n completely from the interpolation function if you are using the "simple" version where mond_n = 1. There is one additional patch-specific parameter for AQUAL runs: - maxstoredcells: This is a technical parameter used to set the size of an array that stores extra grid cell values required due to the enlarged numerical stencil of the non-linear solver. 
Setting this to about 20% of the value of ngridmax is usually more than enough.

The QUMOND patch includes the capability of running cosmological simulations (the publicly-available AQUAL patch has not been modified to allow this, but it is certainly possible without too much work). Therefore, in the case of a QUMOND cosmological run, there are two additional parameters to specify in the MOND_PARAMS section of the namelist:
- mond_cosmo: This modifies the cosmological evolution of the MOND acceleration scale according to the prescription a0_cosmo = a0*a^mond_cosmo, where "a" is the scale factor. Thus for mond_cosmo = 0 there is no cosmological evolution (apart from that arising from the changing physical scales due to the cosmological expansion), whereas for mond_cosmo > 1 the MOND effect is further suppressed at high redshift.
- mond_omega_m: This fixes the particle mass independently of the value of omega_m read in from the cosmological initial conditions, thus ensuring a sensible background evolution with a much reduced matter content (as is typically assumed in a MOND context). While ad hoc, this is necessary in the current absence of a full MOND cosmology.

For further discussion of the issues regarding cosmological simulations in MOND, please see this paper. If you have questions or issues that you absolutely cannot solve after many attempts, please contact me at graeme.candlish at ifa.uv.cl.
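Putting the parameters together, a MOND_PARAMS block in the RAMSES namelist might look like the sketch below. The values are purely illustrative, not recommendations, and maxstoredcells applies to the AQUAL patch only:

```fortran
&MOND_PARAMS
  imond_a0       = 1.2d-10   ! MOND acceleration scale in m/s^2
  mond_n         = 1         ! exponent n in the function x/(1+x^n)^(1/n)
  maxstoredcells = 100000    ! AQUAL only: ~20% of ngridmax is usually enough
/
```

For a cosmological QUMOND run, mond_cosmo and mond_omega_m would be added to the same block.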
public class LittleBank {

    private Queue queue1, queue2;
    private Teller teller1, teller2;

    public LittleBank() {
        // The organizer builds the queues and tellers and wires them together.
        LittleBankServiceOrganizer lbso = new LittleBankServiceOrganizer();
        lbso.createBankEntities();
        queue1 = lbso.getQueue1();
        queue2 = lbso.getQueue2();
        teller1 = lbso.getTeller1();
        teller2 = lbso.getTeller2();
    }

    // A customer joins one of the two queues.
    public void customerArriveToQ1() {
        queue1.enqueue();
    }

    public void customerArriveToQ2() {
        queue2.enqueue();
    }

    // A teller finishes serving the current customer.
    public void endServiceTeller1() {
        teller1.endService();
    }

    public void endServiceTeller2() {
        teller2.endService();
    }

    // Print the current state of both tellers and both queues.
    public void showStatus() {
        System.out.println("----------------------");
        System.out.println("Teller 1 status: " + (teller1.isFree() ? "FREE" : "BUSY"));
        System.out.println("Teller 2 status: " + (teller2.isFree() ? "FREE" : "BUSY"));
        System.out.println("Queue 1 size: " + queue1.getSize());
        System.out.println("Queue 2 size: " + queue2.getSize());
    }
}
Go Agile and Stop Wasting Money on Strategy

We pour a ton of money and work into figuring out what the future will be and what people will think before we do any creative work. We collect data, call in smart and expensive planners who studied at the best schools, and conjure thick strategy decks before any creative work gets done. Once the work gets done, we run all kinds of quant- and qual-tests before launching it. This doesn’t make any sense. Any more. Let me show you why.

When I studied engineering from 1998 to 2003, agile was already the way software development was done, though I think the term “agile” didn’t come around until 2001 or so. Before these lightweight iterative processes (and before my time), slow and bloated project management models were used, where the assumption was that you had excellent information from the start and could plan the projects in a waterfall style – exactly like we do it today in the advertising world, in other words. There are many problems with this style of project management, but one of the most important is that you spend a lot of time and money before any work makes contact with the real world. In other words, you build up an enormous amount of risk before anything gets tested in real life.

In an agile framework, you instead try to quickly get something that is approximately right (a minimum viable product, MVP) in front of people to gauge their reaction so that you can learn, make adjustments, launch a new version, gauge that, and so on in an iterative process. You never take more risk than the delta of work that went into your project since your last iteration.

In the advertising world, it used to be true that we had to make bank-breaking bets on media investments in TV and other broad media because that was the only choice we had. If the creative was off for any reason, it was a disaster that could break your budget and probably get you fired.
While these types of campaigns are still a reality for many marketers, you also have other options in your toolkit today. You can buy 10,000 people on Instagram, for example, to test something out. And it’s a real test. Not one done on a paid focus group or some other frankly quite naive lab test. To me, this makes it almost criminal to work in an 80’s-style waterfall model when developing a campaign. Instead, you quickly want to build an MVP based on your current humble understanding, try it out in a small live campaign, gauge, adjust, and relaunch (or build, measure, learn as we used to say in the software world). You still want to have the smart people in the room, but now you want them involved, working with the creatives and the client iteratively and as a team, throughout the entire process.

I realize that this breaks up the workflow for most agencies and clients. I realize that you may have a management structure that doesn’t support the quick sign-offs or delegation of responsibility necessary for these rapid live iterations. I realize that it requires in-house production to be feasible. But still, this will raise the quality of your work, the speed of your work, the effects of your work, and the efficiency of your work to such an extent that all those issues fade in comparison.

If I started an agency today, I would go all in on this way of working. I would also go all in on helping my clients, through education, training, and hands-on guidance, to adjust their internal management structure and culture to facilitate an agile communication process. Considering the enormous amount of money being wasted at the moment, this is nothing less than a golden opportunity for all parties.

Jim Highsmith, legend of agile and one of the seventeen signatories of The Agile Manifesto, will get to end this text because he puts it better than I ever could: “The Agile movement is not anti-methodology, in fact, many of us want to restore credibility to the word methodology.
We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of never-maintained and rarely-used tomes. We plan, but recognize the limits of planning in a turbulent environment.” So – do you want to sell sugared strategy decks for the rest of your life or do you want to come with me and change the world?
How are the hashes in tumblr image urls generated?

Disclaimer: hobbyist here, not a professional programmer. Working on a pet project, nothing that would be useful in the real world. I'm trying to determine whether the hashes in the url of a tumblr image are in any way related to the contents of the image itself. A typical url looks like this: media.tumblr.com/3b675b5cdc9c6f9414626ba7e0c62f96/tumblr_n8949eWEIi1rw1wnno1_400.gif As you can see, there's a 32-character hash and another 19-character hash. I've tried all of the hashing algorithms supported by PHP 5.4.24, but none of them produces either of these codes. I've looked at the useless tumblr api, and done some searching around, but I can't find anything about how these codes are generated. Does anyone outside of tumblr know?

I am not 100% sure they use PHP for hashes: http://www.quora.com/Tumblr/What-is-Tumblrs-technology-stack Sadly it seems it's a big secret, wish I could be more help.

I doubt that it's just a hash of the content, people upload duplicates all the time and it has to result in a different url. They are likely randomly generated numbers.

I am looking into this right now. According to this Image URL Naming Scheme the "path hash" is generated from the sha1sum of the original uploaded file (not the resized _1280 one).
URL | Post ID
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_o52i3aUCp21s6b18b_raw.png | <PHONE_NUMBER>80
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_o5p15wTJvV1s6b18b_raw.png | <PHONE_NUMBER>58
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_o6krbiugVC1t8vyl1_raw.png | <PHONE_NUMBER>81
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_o8b40w7wGg1t8vyl1_raw.png | <PHONE_NUMBER>86
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_o8ze2tfKNF1t8vyl1_raw.png | <PHONE_NUMBER>96
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_odemy29NAE1tnns90_raw.png | <PHONE_NUMBER>01
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_oeca19dxFg1t8vyl1_raw.png | <PHONE_NUMBER>51
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_oefltuWf7l1t8vyl1_raw.png | <PHONE_NUMBER>26
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_ofvlu8QK0T1sjl8et_raw.png | <PHONE_NUMBER>61
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_oio5kwLNvv1t8vyl1_raw.png | <PHONE_NUMBER>91
https://data.tumblr.com/20b42c7d2d3613dbd9450b5a506cfbd3/tumblr_inline_p1brk7DTpU1tdsqfw_raw.png | <PHONE_NUMBER>60

The Post ID is what you get in a typical tumblr API response. Another proof that the original file's content is used for generation:

https://data.tumblr.com/20aa2775a98061db21f6f86ad46df399/tumblr_noafrk8hB41uowduuo1_r1_raw.png | <PHONE_NUMBER>04
https://data.tumblr.com/20aa2775a98061db21f6f86ad46df399/tumblr_o87jds0wPW1uowduuo1_raw.png | <PHONE_NUMBER>45

The r1 means revision number 1: the file has been altered by the original author, but the path hash and filename are kept the same.
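A quick way to test the sha1sum claim locally (a sketch, not tumblr's actual code) is to download the original _raw file and compare its digests with the 32-character path segment. Note that 32 hex characters is the length of an MD5 digest, while a SHA-1 digest is 40 characters, so an exact match would only be possible against MD5 or a truncated SHA-1:

```python
import hashlib

def file_digests(data: bytes) -> dict:
    """Return hex digests of a downloaded file's bytes, for comparison
    against the 32-character hash segment in a tumblr media URL."""
    return {
        "md5": hashlib.md5(data).hexdigest(),    # 32 hex chars
        "sha1": hashlib.sha1(data).hexdigest(),  # 40 hex chars
    }
```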
You’ve tested your solutions to user needs and built up a clear picture of what it will take to build and operate your service. Now you will build an end-to-end service, test it in public and prepare to go live.

The objective of a beta

The objective of this phase is to build a fully working service which you test with users. You’ll continuously improve on the service until it’s ready to go live, replacing or integrating with any existing services. This is achieved by delivering the user stories in the backlog created in the alpha phase. This is the time to resolve any outstanding technical or process-related challenges, get the service accredited and plan to go live, meeting for the first time many of the technical criteria outlined in the service standard. You should be rapidly releasing updates and improvements into the development environment, and measuring the impact of your changes on the key performance indicators (KPIs) established in your discovery and alpha phases. You’ll also test the assisted digital support for the digital service. You might test one or more of the options you developed in the alpha phase.

How to publish a beta

There are various ways of running the beta phase. In all instances, it should involve interacting with a full, end-to-end version of the service.

A private beta is not open to everyone – it might be regional, or invite only. You might want to choose this option because it:
- gives more control over the audience demographic that gets to use the beta
- allows you to restrict the volume of transactions that go through the beta
- lets you start small and get feedback faster before rolling it out to a wider audience

A public beta is made open to everyone.
It can exist alongside an existing version of the service, and you might:
- use something like A/B testing to funnel some traffic to the beta
- invite people with a separate call to action to use the beta

Duration of the beta phase

The exact duration of your beta will depend on the scope of your project, but an appropriately sized team shouldn’t take more than a few months to create a beta. Following the release of your beta you’ll spend some time iterating on the service until it is ready to go live. You’ll now know what size team you need to create the service, scoping it in response to the findings of your alpha prototype(s). It will be run by a single, suitably skilled service manager, and will include designers, developers, web operations specialists and performance analysts as appropriate.

At the end of the beta phase, you’ll have:
- delivered a (private or public) end-to-end service
- a collection of prioritised work to be done (your backlog)
- a user testing plan
- accurate metrics and measurements to monitor your KPIs
- tested the assisted digital support for your service
- a working system that can be used, for real, by end users
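The traffic-funnelling idea above can be sketched with deterministic hash bucketing. This is a sketch only: the function and parameter names are illustrative, and real deployments typically use a feature-flag or experimentation platform rather than hand-rolled routing:

```python
import hashlib

def route_to_beta(user_id: str, beta_fraction: float = 0.10) -> bool:
    """Deterministically bucket a user so the same person always sees the
    same version: hash the id, map it to [0, 1), and compare against the
    fraction of traffic the beta should receive."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map first 32 bits to [0, 1)
    return bucket < beta_fraction
```

Because the bucket is derived from the user id rather than a random draw, a user who lands in the beta stays in the beta across visits, which keeps feedback consistent.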
By Craig Utley

I was a bit disappointed. I should first say that Craig's book about PerformancePoint Server is a great introduction to monitoring and analytics with PerformancePoint Server, but for people like me who are looking for more on the planning side of the product it was a letdown. Only one out of 10 chapters contained information about planning; the rest of the 300+ pages covers various aspects of monitoring and analytics. I think the statement from Russ Whitney, development manager, says everything about the book: "An excellent introduction to PerformancePoint Server." People interested in planning should choose another book.

Read Online or Download Business Intelligence with Microsoft® Office PerformancePoint Server 2007 PDF

Similar enterprise applications books

Cloud Computing for Dummies is a great start for people looking to learn more about the cloud. It is also an excellent reference for those who know the basics but want more information and insights on workload management, standards, security and governance. There are many books on various aspects of cloud computing, but this is probably the broadest perspective and coverage of this fascinating and complex new environment.

Develop applications for any scenario with this hands-on guide to Microsoft Dynamics CRM 2011: create your first application quickly and without fuss, develop in days what has taken others years, and provide the solution to your company's problems. Microsoft Dynamics CRM is an out-of-the-box solution for your business's sales and marketing needs.

This step-by-step tutorial will take you through Oracle PeopleSoft Financial Management 9.1 and show you how to implement it in your business.
It is written in an easy-to-read style, with a strong emphasis on real-world, practical examples and step-by-step explanations. This book will establish a solid foundation for your efforts to become a successful PeopleSoft Financials practitioner.

- Word Processing with Word: Learning Made Simple
- Directory Services: Design, Implementation and Management (Enterprise Computing)
- IBM Cognos Business Intelligence 10.1 Dashboarding Cookbook
- Finance
- QuickBooks Accounting Manual

Additional resources for Business Intelligence with Microsoft® Office PerformancePoint Server 2007

To illustrate the sizing of a warehouse, consider the same three dimension tables: Time, Product, and Customer. If tracking 10 years of data with a daily grain, this means that the Time table will hold approximately 3,652 records. The company might have 10,000 products and sell to 100,000 customers, so the fact table could hold up to 3,652 × 10,000 × 100,000 ≈ 3.65 trillion records. Even at one tenth of one percent of this value, the fact table would still hold almost four billion records. These are indeed impressive numbers. Because fact tables are often so large, most organizations estimate the storage requirements for their relational data warehouse simply by estimating the number of rows in the fact table, multiplying it by the size of the record, and then adding a percentage for the indexes.

This way, a facility director would know where his or her facility stood in relation to others, and could call those doing well to determine what they were doing right and the best practices that might be implemented at his or her facility. The scorecard used at the time of this project was the first version of the Microsoft Business Scorecard Manager. This product continues to be enhanced and is in its third generation in PerformancePoint Server. The back end was an Analysis Services 2000 cube built from data from the Navision accounting application, among other sources.
Customers who purchase a license of PerformancePoint Server will also get a license for ProClarity. Customers interested in using PerformancePoint Server for analytics will almost certainly install and use the ProClarity Analytics Server, and Microsoft has added a report type in PerformancePoint Server that ties to a report, called a view, in ProClarity Analytics Server.

Figure 2-2

The Analytic Chart and Analytic Grid reports represent the bulk of the analytics functionality in PerformancePoint Server.
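The storage rule of thumb from the warehouse-sizing excerpt (fact-table rows times record size, plus a percentage for indexes) is easy to sketch. The 25% index overhead below is an assumed figure for illustration, not from the book:

```python
def warehouse_size_bytes(fact_rows: int, record_bytes: int,
                         index_overhead: float = 0.25) -> float:
    """Rough relational warehouse size: fact-table rows times record size,
    plus a percentage added on top for indexes."""
    return fact_rows * record_bytes * (1 + index_overhead)

# Worst case for the example dimensions: every Time x Product x Customer combination.
max_fact_rows = 3652 * 10_000 * 100_000  # about 3.65 trillion rows
```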
The English Environment Agency provides a flood-monitoring API (Application Programming Interface, which allows software to access the data directly) giving access to near real-time information including measurements of rainfall, water levels and flows. Water levels and flows are regularly monitored, usually every 15 minutes. Data is then transferred back to the Environment Agency at various frequencies, depending on the site and level of flood risk. The APIs are provided as open data under the Open Government Licence with no requirement for registration.

InfoWorks ICM and ICMLive can connect to the Environment Agency flood-monitoring API using a Time-Series DataBase (TSDB). Time-series databases are described in the following blog post: The TSDBs can then be connected to via a web browser using the Infinity System software described on the following page:

In InfoWorks ICM version 8, a new data source type, EA RestAPI, was added which allows connection to the Environment Agency Real Time Data API for river levels. To set up a connection, the first step is to configure a TSDB data source with type EA RestAPI and the server set to environment.data.gov.uk. The data source name can be set accordingly. Now, in the Observed data tab, configure a data stream to look up the data source set up above. The key component will be the Table, which will be the station ID for the gauge that data is required for. In the image below, the table ID is 2001TH, which is the Benson Lock gauge in Preston Crowmarsh, a short distance from our Wallingford office in Oxfordshire. A list of station IDs is available from the Environment Agency webpage:

Right click on the data stream and choose 'Test Connection'. Hopefully, you will get a response that the connection is OK. Next, right click on the data stream and choose 'Update Data'. This will update the data associated with the data stream.
By default this is a background process which runs external to the InfoWorks ICM/ICMLive user interface and pulls the data from the API into the TSDB for the period specified – in the instance below, between the 1st and 30th January 2018. Once the data retrieval is complete, you can view the time-series data, as well as graph the data for the period. The data can also be accessed with Infinity System. The image below shows the gauging stations which provided data on the 29th January 2018. The data is then periodically updated and can be graphed, analysed and interrogated within Infinity System, with notifications generated as required.

Also, once in the TSDB, the data can be used in a simulation by using a TVD connector to provide the link between the TSDB data stream and an object in the network. The TVD connector can be used to represent boundary conditions (i.e. a level boundary) or observed data for comparison with the simulated data. Currently the EA RestAPI data type only obtains observed data from the river level data. It is possible to access other data, but this requires the development of a Ruby script to be used within the TSDB.
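Outside InfoWorks, the same Environment Agency API can be queried directly over HTTP. A minimal sketch follows; the endpoint path matches the EA's published flood-monitoring API, but verify it against the current documentation before relying on it:

```python
import json
import urllib.request

BASE = "https://environment.data.gov.uk/flood-monitoring"

def readings_url(station_id: str, limit: int = 96) -> str:
    """Build the readings URL for a station, newest readings first.
    Station IDs (e.g. 2001TH for Benson Lock) come from the EA station list."""
    return f"{BASE}/id/stations/{station_id}/readings?_sorted&_limit={limit}"

def latest_readings(station_id: str) -> list:
    """Fetch the most recent level/flow readings for a station as dicts."""
    with urllib.request.urlopen(readings_url(station_id), timeout=30) as resp:
        return json.load(resp)["items"]
```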
View Full Version : How you make a clear sketch from a picture?

02-07-2008, 04:12 PM
If i download a picture from reflib, i use the photoshop feature Filter "fotocopy" to make from this picture a sketch. If a picture has many small details like flowers etc. the sketch is not so clear to transfer them. Has anyone used another digital method to make a sketch from a picture?

Nico, I use Painter and it has a sketch feature under the Effects which sounds very similar. It isn't perfect by any means, and I sometimes just print out a colour copy if I'm working small enough. Painter also has a trace feature which allows you to trace by hand the outlines that you want. I tend to use that when I'm working larger. My printer handles paper up to 13" X 19" so I can do this. When I'm transferring an image to my paper, I have it up on my screen as well, so that I can check for things which aren't clear on the printout. I don't try to trace the smallest details, just the main outlines and key location features. Jane

02-07-2008, 05:33 PM
I think we use the same feature. Which application do you use to print out at a large scale? i use proposter to print out in whichever dimension i want.

02-07-2008, 07:32 PM
One nice feature in Photoshop is what they call the "Magnetic Lasso" selection tool which works somewhat like tracing, but tends to identify the "edges" you are tracing and "stick" to them fairly closely. This makes it easier, especially if you don't have a tablet/pen combination and are trying to do this with a mouse. I also like to use separate layers for the original photo and the traced elements, or other elements added by hand, so you can select which layers are visible and/or printable. This makes it easy to print just the outline you have produced without having to separate it from the photo.

Nico, I just print it out using Painter. If I'm going to use a canvas board, I crop the image and resize to a standard size.
If I'm using paper, I just resize it to the size I want the final to be. Bill, Painter automatically creates a separate layer for the tracing with the tracing paper layer set to 50%, but that's modifiable.

02-08-2008, 01:03 AM
Thanks Jane, I am just getting started with Painter so I'm still getting my bearings.

02-08-2008, 06:11 AM
Can you with Painter print e.g. a 40" X 50" picture on an A4 printer? or is the picture limited to A4 format because of the printer?

02-08-2008, 09:39 AM
I have photoshop, but have never used these features, I mostly use it to mess with the color. I just tend to draw it. Sometimes grids are helpful in transferring an image.

Bill, the command I'm thinking of is the Quick Clone under the File menu. I can assure you that, after having various versions of Painter (currently I'm at 9.5), I'm still getting my bearings. It is a very rich environment in which to work and I find it hard to locate all the palette/brush choices, etc. I've only used a small portion of what's possible.

Nico, usually you are limited by your printer, secondarily by the camera or other image source. For example, photos taken by my camera stored in Tiff format are in the range of 40" x 30" at 72DPI. I could print it at that size and precision if I had a printer that could do that. I bought my particular printer (HP Deskjet 1220C) so that I could print images larger than the usual home printer (max 8 1/2" X 14"), without having to resort to printing sections of the image and piecing them together. I think if you get much beyond the 13" X 19" that mine can handle you're into large format printers that Graphics shops use. Jane

02-08-2008, 01:07 PM
Pat, i have also photoshop cs3 but i can't find this option. How can i print a large image in printing sections so i can put them together so i will have a large print out? until today i do this with proposter. Jane: has painter X this above feature? My printer is the HP laserjet 1018. I want only to print B/W and sketches.
Nico, I don't think that Painter can print sections of the image. But you could ask this question in the Painter's Alley subforum of the Digital Art forum. Jane

02-08-2008, 02:47 PM
Printing an image in sections for piecing together is not handled by the software, but typically is one of the options under the printer preferences menu when you begin a print job. Typically the software will have a drop down menu to select the printer you want to use. Somewhere nearby will be a button for setting printer preferences. Each model (I use only HP) will have slightly different menu sets even for the same brand. To set up a grid on a photo in photoshop you must put a check mark by the show gridlines under the view menu. To actually set the grid up you must go to the photoshop preferences menu as shown below. Notice the central section of the dialogue box labeled GRID. Here you set the color, type, interval, units, etc. of the grid.

To print a large image out in pieces do the following: Go to the Print with Preview Menu and select Page Setup. At the bottom of the dialogue box you will see a button Printer. Select this to open the printer setup. After selecting the desired printer, click on Properties. In the printer properties dialogue select the finishing tab (depending on your model these menu options may look slightly different). What you are looking for is the dropdown menu in the lower right of the printer properties called "Tiling". As the graphic above shows, I have selected a tiling of 3x3 which means the original image will be expanded and printed in pieces on 9 separate sheets of paper with dotted lines to indicate how to align them to piece together. I hope this answers your question, Nico.

02-08-2008, 02:56 PM
Thanks so much for doing this, Bill. I have been trying to find a similar thread that I saw in the pastel talk forum, but to no avail and here you've done it!

02-08-2008, 03:05 PM
Pat- I love doing things like this!
If there is any way I can help on digital questions (not that I know everything, but I have been working with PC's since they came out), please let me know and I would be glad to try and respond. Sometimes it is quicker just to explain something again than find it. Perhaps frequent questions like this might be fodder for articles using the publisher. If you have any ideas I would be glad to contribute. Just don't want to wear out my welcome. Bill :wave: :wave: :wave:

02-08-2008, 03:10 PM
I knew you knew that...duh...the brain was somewhere else. I will keep the article info in mind. This actually may be a useful one. I'll check it out.

02-08-2008, 03:20 PM
After careful search, i am sorry, my printer software (HP Laserjet 1018) does not have this option. BTW: it has only very simple options.

02-08-2008, 04:31 PM
Hey, I just noticed that my new printer has this option, too :clap: . Thank you very much, Bill :wave:

02-08-2008, 08:54 PM
Silvia- Glad you could use the info! Nico- I went to the HP site and looked up the manual for your printer. I also could not find the function listed, perhaps because it is a laser printer, while I think most of the others have been referring to inkjet printers. Jane is correct that any of the printers that exceed 13x19 are generally outside the bounds of what is considered the consumer market. I have an HP 9650 (replaced by a newer model in HP's line now) that also does up to 13x19. The price has come down on similar models. I paid about $400 but think you can probably find them for around $200+- USD these days. I have noticed, though, that the larger the format of paper the printer uses, the smaller (and thus shorter lived and more expensive) the cartridges of colored ink seem to be.

02-10-2008, 05:09 PM
I use CorelDraw and set my page up to the size I want to paint, like 9x12 or 16x20.
Then when I print, I pick a feature called "Tile pages" which prints part of the picture on each 8-1/2x11 printer page and then tape the pages together to make one large sheet, put graphite tracing paper between the sheet and my support and trace the major elements. I can get as detailed as I want.
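For what it's worth, the "photocopy"/sketch filters discussed in this thread are essentially edge detectors. As a toy illustration of the idea (pure Python, treating a grayscale image as a list of rows of 0-255 values; real filters are far more sophisticated):

```python
def edge_sketch(gray, threshold=30):
    """Toy edge detector: mark a pixel dark (0) where the local intensity
    gradient is large, white (255) elsewhere -- the core idea behind
    'photocopy'-style sketch filters."""
    h, w = len(gray), len(gray[0])
    out = [[255] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = gray[y][x + 1] - gray[y][x]  # horizontal gradient
            gy = gray[y + 1][x] - gray[y][x]  # vertical gradient
            if abs(gx) + abs(gy) > threshold:
                out[y][x] = 0
    return out
```

Raising the threshold suppresses small details like the flowers mentioned in the first post, leaving only the strongest outlines.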
Is there an equivalent of EXPLAIN that will work in front of an ALTER TABLE query? It looks like the MySQL EXPLAIN prefix only works in front of certain queries. Is there an equivalent of EXPLAIN that will work in front of an ALTER TABLE query? I would love to be able to find out how long my planned ALTER TABLE statement is likely to take. Background: I have a table from someone else that contains 300 columns of data. I know that I'm only going to need to use a few of those columns, and in order to figure out which columns I need, I'm planning to do a full-text search for a few key words. But in order to do that, I need to add a full-text index. And since I'm new to this size of data set, I'm not entirely sure that this is a realistic plan. I'm hoping something like EXPLAIN (or, more likely, a substitute tool from this thread) might help determine that. EDIT: In answer to a couple questions below, I should mention that this table has about 4 million rows and is on a local testing machine. So I can just run this thing blindly if needed. I just don't prefer to if possible. Thanks for all the good information so far. How many rows are there? There is a bug report about the lack of this functionality, but the bad news is it has not been updated for over 6 years: http://bugs.mysql.com/bug.php?id=34354 Can you create a duplicate of the table or use a backup and run it offline to get a benchmark? See Fast Index Creation, which, I believe, is supposed to be delivered with FTS in 5.5. Thanks to everybody for the good expository questions and links. @Marcus: Good question -- I should have included that. I have around 4 million rows, which is easily an order of magnitude more than I've dealt with before, so I'm not sure what to expect. @Jestep: Good call. I'm actually running on a local instance right now, so that's the good news. I basically just wanted to see if I could leave it running on my 4 million rows overnight and expect the search to be done in the morning. 
Even a few days would be acceptable. A few months would not. :-)

Most ALTER TABLE statements will trigger the copy-to-tmp-table operation: the server creates a temp table with the new schema, locks the table, copies the data from the old table to the new one, then renames it and drops the old table. So most of the time consumed is the copy to the temp table, which depends on how big the table is and whether the server has enough memory. Use SHOW TABLE STATUS to check how big the table is (data_length + index_length), sample another table to learn the transfer speed on your MySQL server, and then you can estimate how long it will take.

Another way is mentioned in the MySQL docs about EXPLAIN on DML, but I didn't get a result; maybe it's not finished yet: http://dev.mysql.com/doc/refman/5.6/en/explain.html "As of MySQL 5.6.3, permitted explainable statements for EXPLAIN are SELECT, DELETE, INSERT, REPLACE, and UPDATE. Before MySQL 5.6.3, SELECT is the only explainable statement."
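The SHOW TABLE STATUS suggestion can also be written against information_schema; a sketch, with the schema and table names as placeholders:

```sql
-- Size the ALTER will have to copy: data plus indexes, in MB
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS total_mb
FROM information_schema.tables
WHERE table_schema = 'your_db'       -- placeholder schema name
  AND table_name   = 'your_table';   -- placeholder table name
```

Dividing total_mb by a copy speed measured on a similar table gives a rough duration estimate for the overnight run.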
We got Moon Monster Madness and Legend of the Vampire. Were those the two movies that you just picked from a hat or something? She wears them in a lot of movies, doesn't she. Even though they've been retconned to be the name of the shoes in Scooby-Doo! and WWE: Curse of the Speed Demon, they've still only been identified by name in that movie, and I'm in no hurry to list about 20 or so appearances. And they have no other significance outside of Stephanie pointing them out, so there's no other scene to point out. When they find the key saying "The Old Bell", there is a painting above them. It moves its eyes sort of cross-eyed at the end of the scene. Could you 'plain [explain] why you did this? That was what I was gonna add. Make sense? It's not really an error just because they didn't spot it. They could've spotted it, they just didn't this time. Thanks for moving it to where I said, though (even though you added an empty heading, among other things). I'm sorry I'm not a regular wikian for pages, so please forgive me, I don't know how to change stuff like that. I am a regular Spongepedia contributor, but if you tell me what/how to fix it in step-by-step instructions I will, thanks. P.S., I changed it to trivia. I know, I said that and I thanked you for doing it as well. It's not the changing part, it's the adding stuff that didn't need to be there to begin with. If you're a regular Spongepedia contributor, then you should be pretty used to the way things work. Ok. So yes, I can read, my library card states that I borrowed 492 books the past few years. To avoid confusion and to be absolutely clear, I am, however, rather impatient and don't really bother reading the whole article, I just check the paragraph I'm working on to make sure nobody has already done what I'm writing. But if you really insist I can try and read the whole article. For some reason I have a feeling that you find me annoying [because I work on so many articles]. I don't find you annoying, exactly.
I can think of more annoyances, if that makes you feel better. True, I find it annoying when you point out stuff you do, only for me to have to check to see if I need to fix anything. Calling yourself an annoyance just seems like you're excusing yourself in a bad way. What is a major pain is when I make a separate page for Dusk (Scooby-Doo! Mystery Incorporated) and people still ignore that and figure it's just as good to make these super great edits at Dusk (the wrong page) instead. I don't honestly care if you think I'm annoying or not, I'm not a sensitive idiot. It doesn't make me feel better, and it doesn't make me feel worse. I am merely stating what I think. I did not call myself an annoyance, I'm just saying you may think I am annoying because of the way you behave sometimes. Thanks for letting me know about the Dusk thing, I never noticed, otherwise I would have checked that instead. I did not excuse myself - apologies if it seemed that way.
OPCFW_CODE
First published on MSDN on Mar 03, 2017 Periodically we are asked how to split an existing filegroup into multiple data files. The quick answer is that SQL Server does not have a built-in way for you to do that automatically, but you can do it yourself. The process is relatively simple and I have provided a script that demonstrates one technique. The script provided is not designed for production and is only provided for illustrative purposes. There have been many articles in the past that talk about using the ALTER INDEX … REBUILD option to move objects from one filegroup to another and to "rebalance" that way. This author acknowledges the benefits of that technique, but sometimes the question of "rebalancing" is driven more by simple "geometry" constraints. For example, suppose I have a database on a volume that I cannot grow, and I simply want to add new files to the filegroup but have those files reside on a different volume. Adding the new files is quite simple, but by default the existing file remains essentially full and there is an imbalance between the old and new files. The technique proposed here will effectively rebalance and move the data out of the existing file across to the new files in such a way that the original file can be reduced in size, thus freeing up space on a volume that is filling up. It should be noted that while this technique "moves" data from the original file to the new files in the same filegroup, it does not guarantee that all objects residing in the filegroup are "balanced". Some objects, depending upon their location in the original data file, may have some data move, all data move or no data move. Ultimately the total amount of allocated pages will be balanced among the various files in the filegroup, but there could still be some hotspots for certain objects. This article and associated script do not attempt to deal with that issue.
High level process for splitting a filegroup into multiple files This process works well and can be done "online" – that is, the objects in the filegroup can be accessed during the splitting process. You should take into consideration that there could be a lot of I/O during this process. In addition to potential performance impacts, databases that participate in an AlwaysOn Availability Group, database mirroring or even log shipping can also be impacted due to the number of log records that are generated – all of which need to be shipped to the respective secondar(ies). This diagram depicts the intended outcome – to take a filegroup with a single data file in it, and split it into multiple data files. Step 1: Add new data files to the filegroup The first step in splitting a filegroup into multiple data files is to add one or more new empty data files to the filegroup. In this example, the desired goal is for the original file in the filegroup to be 1/4th its original size and to have a total of 4 files of equal size in the filegroup. In order to do this, we need to add 3 new data files to the filegroup that are each 1/4th the size of the original data file.

--add (@numfiles-1) files to file group
SELECT @loopcntr = 2;
WHILE @loopcntr <= @numfiles
BEGIN
    SELECT @NewLogicalName = @LogicalName + '_' + CAST(@loopcntr as varchar(5))
    SELECT @NewPhysicalName = REPLACE(@PhysicalName, '.mdf', '_' + CAST(@loopcntr as varchar(5)) + '.ndf')
    SELECT @sql = 'ALTER DATABASE [' + DB_NAME() + '] ADD FILE (' + @crlf
        + 'NAME = ' + @NewLogicalName + ',' + @crlf
        + 'FILENAME = ' + QUOTENAME(@NewPhysicalName, '''') + ',' + @crlf
        + 'SIZE = ' + CAST(@NewFSizeMB as VARCHAR(max)) + 'MB,' + @crlf
        + 'MAXSIZE = ' + CAST(@NewFSizeMB as VARCHAR(max)) + 'MB,' + @crlf
        + 'FILEGROWTH = 0MB) TO FILEGROUP ' + QUOTENAME(@FileGroupName) + ';' + @crlf + @crlf
    PRINT @sql
    exec (@sql)
    SELECT @loopcntr += 1
END

Step 2: Disable autogrowth on the new data files The reason for this will become clear in the next step.
In the sample script provided with this article, step 2 was actually done in combination with step 1 by setting the FILEGROWTH parameter to "0MB" in the ALTER DATABASE … ADD FILE command (see above code segment). Step 3: "Empty" the original data file After the new files have been "capped" we are ready to "rebalance". This is done by executing a DBCC SHRINKFILE command on the original data file with the EMPTYFILE option. This will take the data from the "end of the data file" and move it into the 3 newly added data files. Since each of those files has the same free space in it, the proportional fill algorithm will evenly distribute the data from the original file into the three new files. The filegroup will go from this

--empty the original file -- which will move data into the new files
SELECT @sql = 'BEGIN TRY' + @crlf
    + 'DBCC SHRINKFILE (' + @LogicalName + ', EMPTYFILE)' + @crlf
    + 'END TRY' + @crlf
    + 'BEGIN CATCH' + @crlf
    + ' IF ERROR_NUMBER() <> 2556 BEGIN' + @crlf
    + ' SELECT ERROR_NUMBER(), ERROR_MESSAGE()' + @crlf
    + ' RAISERROR (''Severe error moving data into new files. MANUAL cleanup necessary. Terminating connection...'', 19, 1) WITH LOG' + @crlf
    + ' END' + @crlf
    + 'END CATCH' + @crlf + @crlf
PRINT @SQL
exec (@sql)

The reason we disabled autogrowth on the three new files is to prevent the original file from getting "too empty". In this example, we want 4 files of equal size when we're done. If we had not prevented the 3 new files from autogrowing, they would have kept growing until the first file was either empty or until all objects capable of moving had been moved. This would not have left us in a balanced state, but in a state that would have looked something more like this. Step 4: Re-enable autogrowth and set the size to match for all data files At this point we want to make sure that all the files are set to have the same maximum file size and autogrowth parameters.
This is done so that if the files become full and need to autogrow, they will be set to grow by the same amount – thus leaving the same amount of free space in all the files.

--set all files to have a MAXSIZE and enable autogrowth
SELECT @loopcntr = 1;
WHILE @loopcntr <= @numfiles
BEGIN
    SELECT @NewLogicalName = CASE @loopcntr
        WHEN 1 THEN @LogicalName
        ELSE @LogicalName + '_' + CAST(@loopcntr as varchar(5))
    END
    SELECT @sql = 'ALTER DATABASE [' + DB_NAME() + '] MODIFY FILE (' + @crlf
        + 'NAME = ' + @NewLogicalName + ',' + @crlf
        + 'MAXSIZE = ' + @maxsizeMBText + ',' + @crlf
        + 'FILEGROWTH = ' + @maxgrowthMBText + ');' + @crlf + @crlf
    PRINT @sql
    exec (@sql)
    SELECT @loopcntr += 1
END

Step 5: "Shrink" the original data file to match the file size of the other 3 new files At this point we can issue another DBCC SHRINKFILE on the first file to shrink it to the same size as the other 3 files. The diagram below shows the final state at this point: 4 files of equal size in the filegroup.

--shrink the original file to match the new files' size
SELECT @sql = 'BEGIN TRY' + @crlf
    + 'DBCC SHRINKFILE (' + @LogicalName + ', ' + CAST(@NewFSizeMB as varchar(max)) + ')' + @crlf
    + 'END TRY' + @crlf
    + 'BEGIN CATCH' + @crlf
    + ' IF ERROR_NUMBER() <> 2556 BEGIN' + @crlf
    + ' SELECT ERROR_NUMBER(), ERROR_MESSAGE()' + @crlf
    + ' RAISERROR (''Severe error moving data into new files. MANUAL cleanup necessary. Terminating connection...'', 19, 1) WITH LOG' + @crlf
    + ' END' + @crlf
    + 'END CATCH' + @crlf
PRINT @SQL
exec (@sql)

I have included here a .SQL text file containing the example script for this article. For more information regarding how SQL Server deals with multiple files in a filegroup, please check some of the following references.
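To make the arithmetic of step 1 concrete, here is a small Python sketch that generates the same kind of ADD FILE statements the dynamic T-SQL loop above builds, sizing each new file at 1/num_files of the original. The database, filegroup, and file names below are hypothetical, purely for illustration.

```python
def add_file_statements(db, filegroup, logical_name, physical_path,
                        orig_size_mb, num_files):
    """Generate ALTER DATABASE ... ADD FILE statements that split a
    filegroup's single data file into num_files equal-sized files.
    Each new file is capped (MAXSIZE == SIZE, FILEGROWTH 0MB) so the
    proportional fill algorithm spreads data evenly during EMPTYFILE."""
    new_size_mb = orig_size_mb // num_files  # each file is 1/num_files of the original
    stmts = []
    for i in range(2, num_files + 1):
        name = f"{logical_name}_{i}"
        path = physical_path.replace(".mdf", f"_{i}.ndf")
        stmts.append(
            f"ALTER DATABASE [{db}] ADD FILE ("
            f"NAME = {name}, FILENAME = N'{path}', "
            f"SIZE = {new_size_mb}MB, MAXSIZE = {new_size_mb}MB, "
            f"FILEGROWTH = 0MB) TO FILEGROUP [{filegroup}];"
        )
    return stmts

# Hypothetical example: split a 4000MB file into 4 files of 1000MB each
stmts = add_file_statements("SalesDB", "FG_Data", "SalesData",
                            r"D:\data\SalesData.mdf", 4000, 4)
```

Running this produces the three ADD FILE statements (the original file stays and is later shrunk to 1000MB in step 5).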
OPCFW_CODE
We need a Shopify app. This does not change the fulfillment status of the order. It simply allows us to create POs or drop-ship emails from the day's orders. Here's how it works: Each product is set to have Purchase Order, DropShip, or None. When an order comes in, it looks at the items. If an item is set to Purchase Order, it will add that item to the purchase order for that vendor. If there is already an open purchase order (not sent), it will use that one instead of creating a new one. If it's a dropship, it will create a drop-ship email. These emails are not sent automatically though. The emails remain in the PO or DropShip page until the "Send" button is clicked. Under Fulfillment Service, it should add Purchase Order, DropShip, None. When viewing an order, add an option to change the status of each item. For example, while a product might be marked as Purchase Order, I could change it to DropShip for a single order. Vendors: Here, we will be able to add vendors. Name, Address, Phone, Email POs: Here, we will be able to add, edit, view, and send purchase order emails. Also can mass send or mass delete. DropShip: Here we will be able to add, edit, view, and send dropship emails. Also can mass send or mass delete. 11 freelancers are bidding an average of $282 for this job.
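The routing rules in the spec above can be sketched roughly as follows; the function and field names are hypothetical and not part of any real Shopify API.

```python
def route_order(order, open_pos, dropship_queue):
    """Route each line item of an incoming order to a vendor purchase
    order or a drop-ship email, per its fulfillment setting.
    open_pos maps vendor -> list of items on the pending (unsent) PO."""
    for item in order["items"]:
        setting = item.get("fulfillment", "None")
        if setting == "Purchase Order":
            # Reuse the vendor's open (not yet sent) PO if one exists,
            # otherwise start a new one for that vendor.
            open_pos.setdefault(item["vendor"], []).append(item["sku"])
        elif setting == "DropShip":
            # Queue a drop-ship email; nothing is sent automatically —
            # it waits on the DropShip page until "Send" is clicked.
            dropship_queue.append({"vendor": item["vendor"], "sku": item["sku"]})
        # "None": the item is ignored by the app.
    return open_pos, dropship_queue

order = {"items": [
    {"sku": "A1", "vendor": "Acme", "fulfillment": "Purchase Order"},
    {"sku": "B2", "vendor": "Bolt", "fulfillment": "DropShip"},
    {"sku": "C3", "vendor": "Acme", "fulfillment": "None"},
]}
pos, drops = route_order(order, {}, [])
```

A per-order override (switching one item from Purchase Order to DropShip, as the spec requests) would simply change that item's `fulfillment` value before routing.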
OPCFW_CODE
What do you think about using NHibernate with SQL Server? What would you say if we were to use Entity Framework on our next project? What is your opinion on ORM frameworks? I am a sort of DBA who spends a lot of his time working with developers. Deep in the implementation trenches, cutting code, trying to prevent any future 'server issues' by influencing the design at early stages of development. I find it much more efficient than waiting for them to chuck some code over the fence to us when there is very little we can do but complain about them and get upset that somehow indexes and statistics don't solve the problem. And so I hear those sorts of questions a lot, and hardly ever do I have the time to answer them in any other way than just to say 'it depends'. So here is an attempt at answering this question. The developer in me wants to say: Of course, use an ORM! Go code first if you can. It saves time, it deals with the Object-Relational Impedance Mismatch problem, and it keeps your code clean as there is no need for those strange-looking data queries. All the code is in one place, one solution, easy to find, read and understand. It is data engine agnostic too, so we can deploy it on MySQL, Oracle, PostgreSQL or SQL Server. On anything really. But then the DBA in me wants to shout: Are you mad? Of course not! Don't use ORMs. Ever. They produce unreadable, inefficient queries that are difficult to understand or optimise. The code first approach typically leads to an inefficient schema. New database engine features are ignored because cross-vendor compatibility is more important than performance. And don't you see how leaky those generic repository abstractions you are using are? Really, passing IQueryable to the business layer? Maybe you have the ability to run it on multiple data engines, but now your business layer depends on your ORM framework and the data model. Read Clean Architecture by Uncle Bob, especially the part about keeping frameworks at arm's length.
And so the developer responds: OK. So I will be more specific with my repositories… perhaps. Fine. But I'm not going to write any SQL statements. I don't want any magic strings in my code with no support from the IDE. And no, no stored procedures. We cannot have logic split into multiple layers. All code needs to be in the repo, all code needs to be tested. Don't you see, stored procedures just don't fit in the modern software development cycle. Besides, we have developers who can write LINQ and don't need to know any SQL. But the DBA with a smug look on his face says: Ha! That idea of abstracting away technology so that you don't have to understand it has been tried before. Sometimes it works, sometimes it doesn't. What happened to WebForms? Wasn't the idea to hide HTML and JavaScript to make web development easier for existing Windows developers? How did that go? And that's how it starts again and again, and the discussion in my head goes on and on. But eventually I come to a similar sort of conclusion time after time, and here is what I actually do. (It is a compromise which both the developer and the DBA in me agree on, allowing me to stay sane.)
- For Proof of Concept work I use ORMs and the code first approach. That saves a lot of time and effort, and the code will be a throwaway anyway. My ORM of choice is Entity Framework but it doesn't really matter.
- I don't spend much time thinking about data types. In most cases a string defaulting to nvarchar(255) is good enough for a PoC.
- I prefer to use EF Core as it supports in-memory storage for even faster PoC development and testing.
- Just in case it is not thrown away (as it should be), I keep my architecture clean. I make sure to use specific repositories for data access, and that the repository abstraction is not leaking any implementation details. A repository takes and returns business objects and uses the ORM framework internally only.
- On projects which will not be thrown away I start with Dapper (a micro ORM) and stored procedures. It is a bit more work but forces me to design the data structures better, and offers a lot of benefits for the future (more about that later in this post).
- While I agree that logic should be in one place, there are different types of logic, and those should be implemented independently. There is UI Logic, there is Business Logic and there is Persistence Logic, which I implement in a repository or in stored procedures. A good example would be soft delete functionality.
- All SQL code is kept in the same solution, is tested, and is deployed through the normal CI/CD channels using the DbUp project.
So my answer is: use ORMs as long as they work for you, but architect your code in such a way that you don't depend on them, and be ready to ditch them when they start to cause more problems than they solve. Consider micro ORMs. Try Dapper. Here are a few more benefits of using Dapper with stored procedures:
- Dapper has a much smaller footprint than NHibernate or Entity Framework.
- Dapper is fast, almost as fast as a DataReader, when compared to full ORM frameworks. According to this benchmark, at the moment of writing this post it is 10 times faster than NHibernate.
- While being small and fast, Dapper still takes the ORM problem away.
- Stored procedures add some extra code that needs to be written, but allow access to the latest database engine features. In the case of SQL Server those can be Hekaton (In-Memory OLTP), JSON or XML data types, graph structures, temporal tables, windowing functions and much more.
- Stored procedures make performance troubleshooting and reviews much easier. For DBAs it is much easier to understand which part of an application creates the load, and therefore what it is trying to do, from a well-named stored procedure rather than a lot of auto-generated SQL statements.
- Cooperation with Database Developers is much easier, as they can easily identify queries that need to be optimised, and then improve them without worrying (too much) about any non-SQL code.
- Even if you don't have DBAs and DBDs just now, you might in the future. If the business is successful, you might suddenly need to get somebody with those skills to help you. Having a good structure, with a separated data layer, will make their life easier, your bill lower and everybody happier.
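The repository shape described above, business objects in and out with data access hidden inside, can be sketched in Python, with sqlite3 standing in for Dapper and stored procedures. All names here are illustrative, not from any real project.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:
    """A plain business object; no ORM or driver types leak out of the repository."""
    id: int
    name: str

class CustomerRepository:
    """Takes and returns business objects; the SQL stays internal,
    so the rest of the application never depends on the data layer."""
    def __init__(self, conn):
        self._conn = conn

    def add(self, customer: Customer) -> None:
        # Parameterized query: the only place SQL appears
        self._conn.execute(
            "INSERT INTO customers (id, name) VALUES (?, ?)",
            (customer.id, customer.name))

    def get(self, customer_id: int) -> Customer:
        row = self._conn.execute(
            "SELECT id, name FROM customers WHERE id = ?",
            (customer_id,)).fetchone()
        return Customer(id=row[0], name=row[1])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
repo = CustomerRepository(conn)
repo.add(Customer(id=1, name="Ada"))
fetched = repo.get(1)
```

With Dapper the bodies of `add` and `get` would call stored procedures instead of inline SQL, but the boundary is the same: callers only ever see `Customer`.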
OPCFW_CODE
CI/CD and workflow automation are native capabilities on the GitHub platform. Here's how to start using them and speed up your workflows. The first time I saw a CI/CD pipeline in action was a real wake-up moment. I was working at a company that used GitHub Actions to cut its release times down to five minutes. And if any issues cropped up, you could roll back a release with the touch of a button. At that time, I had just finished a stint at a startup where the release process was far more manual and far more anxiety-inducing. We were a small team and without the benefit of a CI/CD pipeline or blue-green deployments, we could only release updates when users were less likely to be on our app, late at night. It was a tedious process where any degree of human error would stretch out how long it took to deploy a build. Yet that first experience with CI/CD turned a buzzword into something tangible and impressive, fueled by automated workflows running on GitHub Actions. Now, in a funny twist, I'm working at GitHub where GitHub Actions has become a personal focus area. So, for anyone just getting started with CI/CD and workflow automation on GitHub, I want to turn my experience of being introduced to GitHub Actions into a resource. Let's get started. For the uninitiated or anyone who's heard of it but doesn't fully understand it, GitHub Actions is a native CI/CD tool that runs alongside your code in GitHub. In fact, you may have noticed a tab that says "Actions" in a GitHub repository at some point (hint: that's where GitHub Actions lives).
A screenshot showing the GitHub Actions tab in a repository Once you open this tab up for the first time, you'll find a quick description of what GitHub Actions is and some suggested workflows for your repository. That's where the fun starts. GitHub Actions comes with more than 13,000 pre-written and tested CI/CD workflows and pre-built automations in the GitHub Marketplace, as well as the ability to write your own workflows (or customize an existing workflow) in easy-to-use YAML files. A screenshot of the introduction screen a developer will see the first time they open GitHub Actions in a repository I'll walk you through how to build your own GitHub Actions workflow later on, but I'll leave you with this for now: A GitHub Actions workflow can be designed to respond to any webhook event on GitHub. That means you can turn any webhook on GitHub into a trigger for an automation within your CI/CD pipeline—and that includes third-party webhook events too. So, we've talked a bit about a GitHub Actions workflow—but sometimes it's easiest to just see one in action (pun intended): A screenshot of an example GitHub Actions workflow The above workflow is composed of a few different things. These include: When you put all of these concepts together, you get a workflow that might look something like this:

on:
  issues:
    types: [opened]

jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - name: Rick Roll
        uses: email@example.com
        with:
          percentage: 100

(For reference, this is a fun GitHub Actions workflow you theoretically could make part of your CI/CD pipeline. It posts a GIF of Rick Astley as a comment on every new issue that's opened in a repository, which is scientifically proven to bolster your productivity and general enjoyment. I promise.) In my experience, there are four common ways I use GitHub Actions (and don't worry, I'm including links to pre-built workflows that you can drop into your repository and start using right away).
These include: At this point, it shouldn't surprise you to hear that a powerful and common use case for GitHub Actions revolves around CI/CD. It's far from the only CI/CD platform out there (and you can integrate just about any CI/CD platform into your GitHub workflow), but its benefits stem from its close integration with the rest of the GitHub platform—and the ability to trigger any part of a CI/CD pipeline off of any webhook on GitHub. Here are some useful, pre-built GitHub Actions CI/CD workflows you can use to get started: Release management is a critical part of a CI/CD pipeline. Yet you can also automate your releases even without a fully baked CI/CD pipeline. Whichever path you choose, here are two really helpful GitHub Actions workflows to level up your approach to release management: Whether we're talking about part of your CI/CD pipeline or part of your normal workflow, there's a good chance you're using more than one tool when you're building code. Making sure all those tools integrate with one another can be one of the less fun parts of development work—but it's an important step. Here are two types of workflows that I find exceptionally helpful for any beginner: As I've spoken with open source maintainers and people at companies, I've heard time and again how time-consuming maintaining an active project, team, and/or community can be. In addition to CI/CD, GitHub Actions is a great tool for automating repeatable, yet often manual tasks within an organization, as well as managing projects and teams at scale. In that vein, here are a few useful GitHub Actions workflows that I've come across: Just in case the above workflows aren't enough to keep you busy, I wanted to give you a few more. In our Starter Workflows repository, you can find a bunch of pre-built GitHub Actions that are ready to use for continuous integration, continuous deployment, code scanning, and workflow automation.
Every one of these workflows has been built and tested by the GitHub team—and they’re updated regularly too. One of my personal favorites is CodeQL, which brings GitHub’s static code analysis engine into your workflow to identify any known security vulnerabilities in your code. Also, there are plenty of other pre-built workflows for any number of things you may be working on. Since there are more than 13,000 GitHub Actions in the GitHub Marketplace, there’s a good chance you won’t need to create a workflow from scratch, since one probably already exists. Yet there probably will be a few times where you find a workflow that’s almost perfect, but needs a slight tweak to fit your needs perfectly. In this situation, you can either create a new workflow or customize a pre-built workflow. And if you’re wondering how to customize a workflow, try reading this article I put together. Find out how to customize a pre-built GitHub Actions workflow Sometimes it’s easier to learn by watching someone else do something in real-time. So, if you’re trying to build your own GitHub Actions workflow, watch this video to learn how to build your own action in less than 10 minutes.
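Pulling the pieces above together, a minimal CI workflow for, say, a Node.js project might look like the sketch below. The action versions and commands are illustrative; check the Marketplace and your project's scripts for current ones.

```yaml
name: CI

# Run the job on every push and pull request targeting main
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository and set up the toolchain
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Install dependencies and run the test suite
      - run: npm ci
      - run: npm test
```

Drop a file like this into `.github/workflows/` and the Actions tab will pick it up on the next push.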
OPCFW_CODE
Ordnance Survey, Scrum Master February 2013 - Present, Southampton, United Kingdom ➢ Day-to-day Scrum Master Agile coaching, running sprint planning, stand-ups, backlog refinement and release planning meetings, retrospectives, demos. ➢ Building the team of 8 developers, testers and designers and delivery of website services based on Java, Oracle and SAP. ➢ Building a Scrum Community and know-how of best practices for practical implementation of Scrum Work Experience and Scrum experience - 6 months (approx. 120 days/960h) Aviva, Scrum Master and Agile Coach April 2012 - December 2012, Norwich, United Kingdom ➢ Day-to-day Scrum Master work with the team, involving running sprint planning, stand-ups, backlog refinement and release planning, retrospectives, demos. ➢ Building the team of 15 developers, testers and designers and delivery of a website application project using .NET technology and wide integration with ESB, Oracle, Exceed. ➢ Introducing Scrum methodology and Agile techniques into the waterfall organisation, including CI based on Jenkins, TDD, Kanban Board, Story Point estimations, pair programming and code reviews. ➢ Day-to-day work with stakeholders to provide understanding of and commitment to the Scrum methodology ➢ Setting up and managing Jira Server for Aviva Work Experience and Scrum experience - 9 months (approx. 180 days/1440h) Philips&Wipro, Agile Coach October 2011 - February 2012, Eindhoven, Brabant, Netherlands ➢ Agile Coach for Wipro Scrum Masters, focusing on mastering the iterative approach, continuous integration, backlog management, and requirements definition for each new team built by Philips and Wipro ➢ Scrum Master for the onsite team in Eindhoven, building the Philips.com application (Java+ESB+SAP+Oracle) ➢ Definition of KPIs to measure software development and Scrum teams.
Work Experience and Scrum experience - 4 months (approx. 80 days/640h) Yell LTD, Scrum Master February 2011 - September 2011, Reading, United Kingdom ➢ Scrum Master for 2 teams (15 people) responsible for sales tools (GUI+Java+ESB+SAP) projects. ➢ Managing key stakeholders and acting as both Project Manager and Product Owner, including scope definition and backlog management. ➢ Agile Coach responsibilities for implementing the iterative approach, continuous integration, backlog management, and requirements definition. Work Experience and Scrum experience - 7 months (approx. 140 days/1120h)
OPCFW_CODE
Just about every individual you know uses Netflix or Spotify for some quality downtime. These streaming services exist thanks to software engineers as well. The development process is also much more than programming alone. Discuss opportunities for growth and development within the team or organization. To create consistent and open communication, establish a regular cadence (weekly or bi-weekly) for one-on-one meetings with each team member. For instance, if a team member is struggling with a particular problem or project, you could decide to increase the frequency of your one-on-one meetings to provide extra support and guidance. Helping employees identify areas for growth and development, as well as providing guidance on how to achieve their career goals, is essential for long-term success. In this guide, we'll explore some best practices for engineering managers hosting one-on-ones with team members. The cost of hiring a Flutter Developer can vary depending on the size of the company, their budget and also the seniority of the role. The Future Of Software Engineering Through these, you'll have like-minded people around you to help with coding or other work-related issues. It's also a great way to share your ideas with like-minded people. New platforms are also created constantly, and companies need Software Engineers to keep up with the changing times. If there's a new digital platform that would benefit a company, a Software Engineer is there to help the organization transition.
You can use that open source project you worked on, your bootcamp projects, a passion project, or freelance work to show hiring managers and recruiters what you can do. However, they also have the opportunity to work on exciting and innovative projects, collaborate with other skilled professionals, and make a significant impact on the technology landscape. As technology evolves, the demand for software engineering remains high and it doesn't look like it will decrease any time soon. Software engineers have evolved from building IT products to problem-solvers who address complex business and social challenges and develop essential solutions.
- What may appear to be a small change might really involve three to four developers, and cross into multiple departments.
- Nurture your inner tech pro with personalized guidance from not one, but two industry experts.
- Generally speaking, a software engineer will collaborate with a designer and a product manager.
- As technology evolves, the demand for software engineering remains high and it doesn't look like it will decrease any time soon.
- Low-level access to memory, the use of simple keywords, and a clean syntax make C easy to use for such a task.
- If you're able to commit to the time and the work, it's entirely possible.
We've done our best to hire great software engineers so you can reap the benefits. When you are developing a product, you are creating it for the end-user. Understanding the audience that will use the product and how they use it is important for making sure the product fits the user's needs as well as business objectives and requirements. Benefits Of One-on-one Meetings To do this, professional software engineers will have mastery of several programming languages, depending on which ones they like or which ones are most in demand in the industry.
I lead the software development for Air Detective, which provides you with near real-time air quality analysis. We also list professional organizations for the software engineering field. These organizations offer support, educational information and other resources that may be useful to you as you build and grow your career. A mobile developer is very different from a backend developer. And even within mobile or backend, there are different sub-specialisations based on programming languages or operating systems. While older generations often criticize the technology itself, human people – also known as software engineers – bear much of the responsibility for these tools. The world runs on it, thrives on it; it's not going anywhere, and it will only become more prevalent. I want to be a part of that, to help create and shape the future. When you put such a group of motivated people together with a common goal, the technological advancement you can achieve is limitless. The best part is the ever-changing landscape of software components available. Other languages in the repertoire of C developers may be higher-level languages and frameworks that work well with C, like Java, Node.js, and Python. With structured programming, alternatively referred to as modular programming, code is readable and there's leeway for reusable components, which most developers find useful. Of course, getting to know the true intentions of a developer ties back into those communication skills that you'll get a glimpse of during the first couple of interviews. But an honest job description can weed out the worst apples. In the end, this balance will result in a high-quality product. Great software engineers aim high but remember to keep their feet on the ground.
Software Engineer Job Description
They use their expertise in programming languages, software development methodologies, and tools to build and deliver software products that meet the needs of businesses, organizations, or end users. Software engineers are computer science professionals who use engineering principles and programming languages to build software products, develop web and mobile applications, and run network control systems. Software engineering is a branch of computer science that involves creating software and computer systems. Typically, writing code or programming is a big part of the development process. Through programming, software engineers can design anything from games to operating systems. Software engineering falls under the umbrella of computer science and refers to designing, building and maintaining software applications. To learn more, tell us about your project and we'll get you started. Eastern Europe shares very similar rates to South America, again as a result of economic differences. Looking at salaries in Eastern Europe, data shows that a Senior C Developer costs around $100,000 on average. Otherwise, we'd recommend you contact Trio for consulting and developer allocation. The library C offers is rich with built-in functions and is furnished with dynamic memory allocation. What's more, C has far fewer library functions than other languages but just as many features, simplifying their deployment.
What Is Software Engineering?
As with many jobs, if you're going to become a successful programmer, you're going to need both technical skills and soft skills to start your career. If you're a little confused by all the varying job titles—don't be! We've created a useful guide to differentiating software engineers from web developers. Software engineers analyze and design software systems, while developers lead and create the software.
If you want to master a programming language, you'll have to practice outside the classroom. This will improve your ability to write code and it will give you more confidence when you start applying to jobs. CareerFoundry is an online school for people looking to change to a rewarding career in tech. Select a program, get paired with an expert mentor and tutor, and become a job-ready designer, developer, or analyst from scratch, or your money back. If you want to stay competitive, your business needs to be challenged. Rather than just putting an app on the app store, why not develop a complete software product? In recursive programming, functions have the ability to call themselves, whether directly or indirectly. The purpose of this feature is to break a problem up into smaller problems. A software engineer uses knowledge of data, technology and engineering to create innovative solutions for business problems. Careers in software development and engineering involve understanding complex problems and providing even more complex solutions. Depending on the job, professionals may also spend a lot of time alone at the computer; aspiring software engineers should be comfortable with a solitary work environment. As a software engineering professional, you may work from an office or from home, or you might travel to multiple locations. At times, you may work alone at your computer all day writing and testing code or designing applications. On other days, you may meet with collaborators, clients or other stakeholders to identify challenges and determine solutions. Software engineering, also called software development, is the practice of designing, testing, and building applications for operating systems, hardware, and networks. Entry-level software engineers may take on a variety of roles.
While working with a team, they may focus on the back end of software development and build the code structure, or on the front end to ensure that the user interface stays consistent. Those with a degree and experience in software engineering can explore different computing career options. Software engineers create web applications, mobile apps, robots, operating systems, and network systems. They develop software solutions that meet their companies' needs and expectations. Note that the terms "software engineer" and "software developer" are used interchangeably in the industry, but these positions' responsibilities vary slightly. Software developers may focus on specific types of software products, such as video games, computer applications, database development, industrial software or consumer products. Many companies use software developer and software engineer interchangeably.
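The recursive programming idea mentioned earlier, where a function breaks a problem into smaller instances of itself, can be sketched in a few lines of Python (the function and data here are our own illustration, not from the article):

```python
def total_size(item):
    """Recursively sum the numbers in a nested structure by breaking
    the problem into smaller sub-problems (one per nested list)."""
    if isinstance(item, list):
        # Composite case: the function calls itself on each part.
        return sum(total_size(part) for part in item)
    # Base case: a plain number needs no further splitting.
    return item

# A nested "directory tree" of sizes.
print(total_size([1, [2, 3], [[4], 5]]))  # → 15
```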
Mini-Workshop “Model Reduction and Control”
Date: Tue. May 24, 2022
Organized by: FAU DCN-AvH, Chair for Dynamics, Control and Numerics – Alexander von Humboldt Professorship at FAU Erlangen-Nürnberg (Germany)
Zoom meeting link
Meeting ID: 669 1145 2581 | PIN: 969890

Sebastian Peitz, Paderborn University (Universität Paderborn)
“Efficient data-driven prediction and control of complex systems via the Koopman operator”
Abstract. As in almost every other branch of science, the advances in data science and machine learning have also resulted in improved modeling and simulation of nonlinear dynamical systems, prominent examples being autonomous driving or the control of complex chemical processes. In many cases, data-driven methods are advertised to ultimately be useful for control. However, the question of how to use a predictive model for control is left unanswered in many cases due to a higher system complexity, the requirement of larger data sets and an increased and often problem-specific modeling effort. In this presentation, we discuss how to realize the transition from prediction to control for data-driven models in a data-efficient manner. To this end, we will particularly focus on the Koopman operator, which is a linear yet infinite-dimensional operator describing the dynamics of observables. Due to its linearity, it has been a tool of particular interest in the dynamical systems community in recent years, as it allows us to apply tools from linear systems to nonlinear ones if we can find a suitable finite-dimensional approximation. We will show that this is also the case in the control setting, and demonstrate the performance using several example systems governed by partial differential equations.

Andrea Manzoni, Politecnico di Milano, MOX
“Reduced order modeling for optimal control, deep learning for reduced order modeling”
Abstract.
This talk consists of two (almost independent) parts, addressing recent results in the framework of optimal control and reduced order modeling (ROM) for parametrized PDE problems. In the first part, a reduced basis method is considered to efficiently achieve thermal cloaking from a computational standpoint in several virtual scenarios by controlling a distribution of active heat sources. We frame this problem in the setting of PDE-constrained optimization, where the reference field is the solution of the time-dependent heat equation in the absence of the object to cloak. The optimal control problem then aims at actuating the space-time control field so that the thermal field outside the obstacle is indistinguishable from the reference field. In the second part, recent achievements dealing with deep learning (DL)-based reduced order models are presented. In the ROM framework, DL algorithms aim at overcoming the traditional bottlenecks arising when dealing with nonlinear time-dependent parametrized PDEs, such as (i) the need to deal with projections onto high dimensional linear approximating trial manifolds, (ii) expensive hyper-reduction strategies, or (iii) the intrinsic difficulty to handle physical complexity with a linear superimposition of modes. In the resulting DL-ROMs, both the trial manifold and the reduced dynamics are learned in a non-intrusive way, by relying on deep (e.g., feedforward, convolutional, autoencoder) neural networks trained on a set of FOM solutions obtained for different parameter values, thus yielding the possibility to simulate in real-time complex physical phenomena.

Maria Strazzullo, Politecnico di Torino, DISMA
“Model order reduction for time-dependent parametrized optimal control problems”
Abstract. Time-dependent parametrized optimal control problems are of utmost importance in many applications where one wants to fill the gap between collected data and partial differential equations.
Despite their indisputable usefulness, their computational complexity still limits their applicability when the setting requires many evaluations of the problem for a comprehensive parametric analysis. To tackle this issue, we employ reduced order methods. Indeed, they describe the parametric nature of the system in a low-dimensional framework to accelerate the system solution, while remaining accurate. This talk focuses on reduced strategies for time-dependent problems. We discuss two algorithms: a space-time POD algorithm validated on nonlinear equations used in coastal environmental applications, and a space-time greedy algorithm with a new error estimation for parabolic systems. Finally, we show some examples of potential applications in several scientific fields, from bifurcating phenomena to numerical stabilization.

Previous FAU DCN-AvH Workshops:
- Seminar Series: Deep Learning in Control by Heiland (January 17th, 2022)
- Mini-workshop: “Recent Advances in Analysis and Control” by Lazar, Zamorano, Lecaros (January 14th, 2022)
- Mini-workshop: “Recent Advances in Analysis and Control” by Ftouhi, Rodríguez, Song, Matabuena (October 1st, 2021)
- Mini-workshop: “Recent Advances in Analysis and Control” (II) by Sônego, Minh Binh Tran (May 21st, 2021)
- Mini-workshop: “Recent Advances in Analysis and Control” by Della Pietra, Wöhrer, Meinlschmidt (April 30th, 2021)

If you like this, you don’t want to miss our upcoming events!
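The core idea behind the Koopman-based approach in the first abstract, fitting a linear operator to snapshot data of a dynamical system, can be illustrated with a minimal dynamic mode decomposition (DMD) sketch. The toy system and variable names below are ours, not from the talks; for a nonlinear system one would first lift the state into observables before the same least-squares fit.

```python
import numpy as np

# Toy linear system x_{k+1} = A x_k.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])

# Collect snapshot pairs (x_k, x_{k+1}) from one trajectory.
x = np.array([1.0, 1.0])
X, Y = [], []
for _ in range(20):
    x_next = A @ x
    X.append(x)
    Y.append(x_next)
    x = x_next
X, Y = np.array(X).T, np.array(Y).T   # columns are snapshots

# Best linear predictor in the least-squares sense: K = Y pinv(X).
K = Y @ np.linalg.pinv(X)
print(np.allclose(K, A))  # → True: the fitted operator recovers the dynamics
```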
I am a postdoc, applying for tenure track jobs in mathematics. I did my bachelor degree as a kid (I was 14 when I started). I wonder if I should/could mention this in my academic CV?

It is your CV, you can do what you like with it. – Louic Aug 17, 2017 at 8:46
2 Of course I can do what I want; my question is whether it is advisable to do that. – Young Aug 17, 2017 at 8:46
That all depends on who is reading it. The answer is simple: if you are proud of it, mention it. If not, don't. There are no rules for a CV: it is your personal CV so do what you like. – Louic Aug 17, 2017 at 8:49
I'm not sure I understand the question. Do you mean that you did high school and college concurrently? – Pete L. Clark Aug 17, 2017 at 12:05
5 I wouldn't do this in the US, for the simple reason that age discrimination is illegal. Formally, I'm not allowed to care how old you are, or how old you were when you got whatever degree; I'm only allowed to care about what you've done. – JeffE Aug 17, 2017 at 13:52

In my academic CV, I have included all my university education, which naturally includes my undergraduate degree. I don't think it makes any difference that you started when you were 14. You came out with the same piece of paper as everyone else in the end.

I am fairly sure that the question is about whether to just mention the BSc, or whether to also mention that it was obtained in parallel to attending high school. – Arno Aug 17, 2017 at 12:40

It possibly depends on where you are sending the CV. For a research position, it clearly indicates intelligence and thus probably would help, or at least not hurt (as long as you don't overplay it). For more of a teaching job (e.g. at a 4-year college) I could see how it might potentially hurt: if you didn't have the typical high school math background, you might have difficulty teaching students who have just come from that (with all of the gaps and weaknesses such a background so often entails).
Perhaps you can prepare two different versions of your CV, one that contains the parenthetical (started at age 14) and one that doesn't. For each job you apply to, make a judgment call about which one to send, taking into account both the job description and the nature of the school.

I was in a very similar situation: I started my BSc at 13, got it at 16, and was looking for a faculty-level position in Math or CS [plenty of other things happened in between]. My CV does mention this. It never came up during an interview, or in informal feedback, either in a positive or a negative way. I got a position. Of course, this is just a single data point, but maybe it helps.

This is a good thing though. Starting a Bachelor's degree at the age of 13 (or 14) is indeed an achievement. In my country, it is not allowed. It should definitely be mentioned. I don't know how normal it is in other countries. – Coder Aug 17, 2017 at 11:35
BUG: ScalerCH.select_channels() selects channel with no name text

In https://github.com/BCDA-APS/use_bluesky/issues/56, the name field of the first channel of the scaler was left blank in EPICS. The select_channels() code failed to handle this properly. Even when that name was set after the bluesky session started, the select_channels() code still failed to handle this properly. If the channel had a name before bluesky connected to the scaler, then the select_channels() code works properly.

Also, now is a good time to make the chan_names argument into a kwarg with default of None or []. Do that enhancement in a separate issue (#870).

Further investigation with ophyd 1.5.1

test program

```python
#!/usr/bin/env python

import ophyd
print(f"ophyd version: {ophyd.__version__}")

from ophyd.scaler import ScalerCH
from apstools.utils import device_read2table

scaler = ScalerCH("sky:scaler1", name="scaler")
scaler.wait_for_connection()

def procedure(text):
    scaler.channels.chan01.chname.put(text)
    scaler.select_channels(None)
    scaler.trigger()
    device_read2table(scaler)

procedure("")
procedure("clock")
```

output

```
(bluesky_2020_5) mintadmin@mint-vm:~$ ./scaler_869.py
ophyd version: 1.5.1
=========== ===== ==========================
name        value timestamp
=========== ===== ==========================
            0.0   2020-07-20 23:40:59.445615
I0          0.0   2020-07-20 23:40:59.445615
scint       0.0   2020-07-20 23:40:59.445615
diode       0.0   2020-07-20 23:40:59.445615
I0Mon       0.0   2020-07-20 23:40:59.445615
ROI1        0.0   2020-07-20 23:40:59.445615
ROI2        0.0   2020-07-20 23:40:59.445615
scaler_time 0.0   2020-07-20 23:40:59.445615
=========== ===== ==========================

=========== ===== ==========================
name        value timestamp
=========== ===== ==========================
clock       0.0   2020-07-20 23:41:22.215437
I0          0.0   2020-07-20 23:41:22.215437
scint       0.0   2020-07-20 23:41:22.215437
diode       0.0   2020-07-20 23:41:22.215437
I0Mon       0.0   2020-07-20 23:41:22.215437
ROI1        0.0   2020-07-20 23:41:22.215437
ROI2        0.0   2020-07-20 23:41:22.215437
scaler_time 0.0   2020-07-20 23:41:22.215437
=========== ===== ==========================
```

Summary: when the first channel name is blank in EPICS, select_channels() fails to remove that channel. When it is set, select_channels() works properly. Note that if the second channel name is also blank, it is removed each time. Only the first channel name has this problem.

Since scaler_time is always reported, this line is not needed:
https://github.com/bluesky/ophyd/blob/09cc0c512ccd9c6e45587e071b871905bd63fd7f/ophyd/scaler.py#L145

replace it with:

```python
read_attrs = []
```
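For context, the kind of guard that addresses this is to build read_attrs only from channels whose EPICS name string is non-blank, regardless of position. A minimal illustrative sketch (the channel-name list and attribute naming are hypothetical, not the actual ophyd implementation):

```python
# Hypothetical channel names as read from EPICS; "" means unnamed.
channel_names = ["", "I0", "scint", "", "diode"]

# Keep only channels with a non-blank name, including the first one.
read_attrs = [
    f"chan{i + 1:02d}"
    for i, name in enumerate(channel_names)
    if name.strip()
]
print(read_attrs)  # → ['chan02', 'chan03', 'chan05']
```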
M: Static Hosting with Amazon S3 and Cloudfront - By-Jokese https://byjokese.com/blog/Static-Hosting-with-Amazon-S3-and-Cloudfront.html
R: wilkystyle I'm using Hugo [0], and this is how I host my site. I have a small bash script that will generate the static files, push to S3, and issue a CloudFront invalidation. Also, my bill is ~0.60 USD/month. [0] [https://gohugo.io](https://gohugo.io)
R: mjlee I have done the exact same thing with a Makefile :) [https://blog.martinlee.org/posts/how-its-made-pt2-or-hugo-in...](https://blog.martinlee.org/posts/how-its-made-pt2-or-hugo-in-s3-and-cloudfront/)
R: zackbloom Be sure to check out Stout as well, we use it to deploy our static apps and it works and does all the config for you: [http://stout.is](http://stout.is)
R: fibo I also use S3, and also add a bucket named www.example.com that redirects to the example.com bucket.
import numpy as np
from sklearn import preprocessing
import pandas as pd


def feature_categorical_conversion(lst_data):
    """
    Convert categorical data to numerical values in a list of datasets

    :param lst_data: list of related datasets to convert to numerical
    :return: list of converted datasets and a dictionary mapping categorical
             values to numbers, per feature
    """
    dict_categorical_features = dict()
    features_all = lst_data[0].columns.values.tolist()
    for feature in features_all:
        dict_values = dict()
        lst_values = []
        for data in lst_data:
            df_data_feature = data[feature]
            if df_data_feature.dtype == object:
                # To convert into numbers, build a dictionary between factors and numbers
                data[feature].replace(np.NaN, 'Unknown', inplace=True)
                # Get unique values; add them to the list and dictionary if new
                lst_values_data = data[feature].unique()
                lst_values_added = [item for item in lst_values_data if item not in lst_values]
                lst_values.extend(lst_values_added)
                for value in lst_values_added:
                    dict_values[value] = len(dict_values)
                # Convert categorical data to numbers
                data[feature].replace(dict_values, inplace=True)
        if len(lst_values) > 0:
            dict_categorical_features[feature] = dict_values
    return lst_data, dict_categorical_features


def feature_combination(df_train, df_cross_validation, df_normal, df_anomaly):
    new_data_train = pd.DataFrame()
    new_data_cross_validation = pd.DataFrame()
    new_data_normal = pd.DataFrame()
    new_data_anomaly = pd.DataFrame()
    for i in range(len(df_train.columns)):
        for j in range(i + 1, len(df_train.columns)):
            column_name = "C(" + str(i) + "," + str(j) + ")"
            # Note: the deprecated .ix indexer has been replaced with .iloc
            data_train_product = df_train.iloc[:, i] * df_train.iloc[:, j]
            data_cross_validation_product = \
                df_cross_validation.iloc[:, i] * df_cross_validation.iloc[:, j]
            # Skip combinations that are constant in either set
            if len(pd.Series.unique(data_train_product)) != 1 and \
                    len(pd.Series.unique(data_cross_validation_product)) != 1:
                new_data_train[column_name] = data_train_product
                new_data_cross_validation[column_name] = data_cross_validation_product
                new_data_normal[column_name] = df_normal.iloc[:, i] * df_normal.iloc[:, j]
                new_data_anomaly[column_name] = df_anomaly.iloc[:, i] * df_anomaly.iloc[:, j]
    data_train_result = pd.concat([df_train, new_data_train], axis=1)
    data_cross_validation_result = pd.concat([df_cross_validation, new_data_cross_validation], axis=1)
    data_normal_result = pd.concat([df_normal, new_data_normal], axis=1)
    data_anomaly_result = pd.concat([df_anomaly, new_data_anomaly], axis=1)
    return data_train_result, data_cross_validation_result, data_normal_result, data_anomaly_result


def feature_normalization(data, type_normalization):
    """
    Normalise data according to the selected type (std, minmax, mean)

    :param data: data to normalise
    :param type_normalization: type of normalization
    :return: normalised data
    """
    if type_normalization == "std":
        std_scale = preprocessing.StandardScaler().fit(data)
        data_norm = std_scale.transform(data)
    elif type_normalization == "minmax":
        std_scale = preprocessing.MinMaxScaler().fit(data)
        data_norm = std_scale.transform(data)
    else:
        mean = np.mean(data, axis=0)
        data_norm = data - mean
        std = np.std(data_norm, axis=0)
        data_norm = data_norm / std
    return data_norm


def set_gaussian_shape(df_data, df_feature, factor_to_multiply, funct_to_apply, *args):
    """
    Try to make the feature distribution look like a Gaussian shape in order
    to use Bayes' theorem

    :param df_data: data frame with data
    :param df_feature: feature of the data frame to modify
    :param factor_to_multiply: factor to scale the result by, if necessary
    :param funct_to_apply: function applied to reshape the feature (log, power, ...)
    :param args: extra arguments for the applied function, if necessary
    :return: data frame with the values of the selected feature modified
    """
    df_data[df_feature] = factor_to_multiply * funct_to_apply(df_data[df_feature], *args)
    return df_data
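As a usage sketch, the categorical-to-numerical mapping that feature_categorical_conversion() applies per column can be shown on a toy column (the data here is invented for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", np.nan, "red"]})

# Missing values become the 'Unknown' category, as in the function above.
df["color"] = df["color"].replace(np.nan, "Unknown")

# Each unique value gets the next integer in order of first appearance.
mapping = {value: i for i, value in enumerate(df["color"].unique())}
df["color"] = df["color"].replace(mapping)

print(mapping)                 # → {'red': 0, 'blue': 1, 'Unknown': 2}
print(df["color"].tolist())    # → [0, 1, 2, 0]
```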
from functools import update_wrapper
from inspect import isgeneratorfunction, signature
import io


def should_be_file(file_argname, mode="rb"):
    """Decorator to enforce the given argument is a file-like object.

    If it isn't, it will be seen as a filename, and it'll be replaced by
    the open file with the given mode.
    """
    def decorator(func):
        if isgeneratorfunction(func):
            def wrapper(*args, **kwargs):
                bound_args = signature(func).bind(*args, **kwargs)
                file_arg = bound_args.arguments[file_argname]
                if hasattr(file_arg, "read"):
                    # Already a file, nothing to do
                    yield from func(*args, **kwargs)
                    return
                with open(file_arg, mode) as file_obj:
                    # "Cast" name to file
                    bound_args.arguments[file_argname] = file_obj
                    yield from func(*bound_args.args, **bound_args.kwargs)
        else:
            raise NotImplementedError
        return update_wrapper(wrapper, func)
    return decorator


class LineSplitError(Exception):
    pass


class LineSplittedBytesStreamWrapper:
    def __init__(self, substream, line_len, newline):
        self.substream = substream
        self.line_len = line_len
        self.newline = newline
        self.rnext_eol = line_len  # payload bytes remaining before next EOL
        self.writing = False

    def _check_eol(self):
        if self.substream.read(len(self.newline)) != self.newline:
            raise LineSplitError("Invalid record line splitting")

    def read(self, count=None):
        result = []
        remaining = float("inf") if count is None else count
        while remaining > 0:
            expected_len = min(self.rnext_eol, remaining)
            data = self.substream.read(expected_len)
            data_len = len(data)
            result.append(data)
            remaining -= data_len
            if self.rnext_eol == data_len:
                self._check_eol()
                self.rnext_eol = self.line_len
            else:
                self.rnext_eol -= data_len
                break
        return b"".join(result)

    def write(self, data):
        self.writing = True
        result = remaining = len(data)
        while data:
            buff_len = min(self.rnext_eol, remaining)
            buff, data = (data[:buff_len], data[buff_len:])
            self.substream.write(buff)
            remaining -= buff_len
            if self.rnext_eol == buff_len:
                self.substream.write(self.newline)
                self.rnext_eol = self.line_len
            else:
                self.rnext_eol -= buff_len
                break
        return result

    def close(self):
        if self.rnext_eol != self.line_len:
            if self.writing:
                self.substream.write(self.newline)
            else:
                self._check_eol()
        self.substream = None

    tellable = lambda self: self.substream.tellable()

    def tell(self):
        line_no, col_no = divmod(self.substream.tell(),
                                 self.line_len + len(self.newline))
        return line_no * self.line_len + col_no

    seekable = lambda self: self.substream.seekable()

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            if offset < 0:
                raise ValueError("Negative offset")
            line_no, col_no = divmod(offset, self.line_len)
            line_start = line_no * (self.line_len + len(self.newline))
            self.substream.seek(line_start, whence)
            self.rnext_eol = self.line_len
            self.read(col_no)
            return offset
        elif whence == io.SEEK_CUR:
            # Offsets are in logical (payload) coordinates, so start from
            # the wrapper's own position, not the substream's.
            return self.seek(self.tell() + offset, io.SEEK_SET)
        elif whence == io.SEEK_END:
            self.read()  # Just to reach the end of stream
            return self.seek(max(0, self.tell() + offset), io.SEEK_SET)
        raise ValueError("Invalid whence")


class TightBufferReadOnlyBytesStreamWrapper:
    def __init__(self, substream):
        self.substream = substream
        self.buffer = b""
        self.offset = 0
        self.finished = False

    def read(self, size=None):
        if size is None or size < 0:
            self.buffer += self.substream.read()
            result = self.buffer[self.offset:]
            self.finished = True
        else:
            expected_offset = self.offset + size
            missing = expected_offset - len(self.buffer)
            if missing > 0:
                self.buffer += self.substream.read(missing)
                if len(self.buffer) < expected_offset:
                    self.finished = True
            result = self.buffer[self.offset:expected_offset]
        self.offset += len(result)
        return result

    close = lambda self: None  # Required to be a file-like object
    tellable = lambda self: True
    tell = lambda self: self.offset
    seekable = lambda self: True

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            if offset < 0:
                raise ValueError("Negative offset")
            self.offset = offset
        elif whence == io.SEEK_CUR:
            self.offset = max(0, self.offset + offset)
        elif whence == io.SEEK_END:
            if not self.finished:
                self.read()  # Just to reach the end of stream
            self.offset = max(0, len(self.buffer) + offset)
        else:
            raise ValueError("Invalid whence")
        return self.offset
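To make the record-splitting convention concrete, here is a standalone sketch of the invariant LineSplittedBytesStreamWrapper maintains: the payload is stored as fixed-width lines, and reading strips the newlines back out. The helper names below are ours, not part of the module.

```python
def split_lines(payload, line_len, newline=b"\n"):
    """Store a byte payload as fixed-width lines, as the wrapper's write()/close() do."""
    chunks = [payload[i:i + line_len] for i in range(0, len(payload), line_len)]
    return newline.join(chunks) + newline

def join_lines(encoded, newline=b"\n"):
    """Recover the payload, as the wrapper's read() does."""
    return encoded.replace(newline, b"")

encoded = split_lines(b"abcdefgh", 3)
print(encoded)              # → b'abc\ndef\ngh\n'
print(join_lines(encoded))  # → b'abcdefgh'
```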
A VPP token is only supported for use on one Intune account at a time. When you assign volume-purchased apps to a device, the end user of the device does not have to supply an Apple ID to access the store. For each group you selected, choose the following settings: For more information, see How to manage iOS eBooks you purchased through a volume-purchase program. If you previously used a VPP token with a different product, you must generate a new one to use with Intune. After you have imported the VPP token to Intune, do not import the same token to any other device management solution. VPP for Business apps purchased by the end user will sync to their Intune tenants. The app can be run on multiple devices that the user owns, with a limit controlled by Apple. The iOS app store lets you purchase multiple licenses for an app that you want to run in your company. When you assign a volume-purchased app to users, each end user must have a valid and unique Apple ID in order to access the app store. If you have an app that is associated with multiple VPP tokens, you see the same app being displayed multiple times, once for each token. Automatic app updates work for both device and user licensed apps for iOS Version. Intune does not synchronize those user accounts into Intune as a security measure. Using DEP, you can configure enterprise devices without touching them. The following table explains each condition: You can synchronize the data held by Apple with Intune at any time by choosing Sync now. When you assign an app to devices, one app license is used, and remains associated with the device to which you assigned it. Ensure that when you set up a device for a new Intune user, you configure it with that user's unique Apple ID or email address. You can start a manual sync at any time. Additionally, you should understand the following criteria: The Apple ID or email address and Intune user form a unique pair and can be used on up to five devices.
License type - Choose from User licensing, or Device licensing. The list of apps displayed is associated with a token. Microsoft Intune helps you manage multiple copies of apps purchased through this program by: When you are done, select Create. Additionally, you can synchronize, manage, and assign books you purchased from the Apple volume-purchase program VPP store with Intune. Changing the country will update the apps metadata and store URL on next sync with the Apple service for apps created with this token. On the Create VPP token pane, specify the following information: You can associate multiple VPP tokens with your Intune account. In addition, third-party developers can also privately distribute apps to authorized Volume Purchase Program for Business members specified in iTunes Connect.
If you, as a system administrator, are in charge of managing not only servers but also your company’s IT assets, you will need to monitor their status as well as their physical location. Additionally, you must be able to report the current occupation and utilization percentage of your datacenter. Having this information handy is essential before planning new implementations or adding new equipment to your environment, and this is as valid for small and medium-sized server rooms as for the classic datacenter and the cloud.

In this article we will explain how to install and use RackTables, a web-based datacenter management system, on CentOS/RHEL 7, Fedora 23-24 and Debian/Ubuntu systems. It will help you to document your hardware assets, network addresses and configuration, and the physical space available in racks, among other things. You can also try out this software through the demo version on the project’s website in order to examine it before proceeding. We are sure you will love it!

In CentOS 7, although RackTables is available from the EPEL repository, we will install it by downloading the tarball with the installation files from the project’s website. We choose this approach in CentOS instead of installing from the repositories to simplify and unify the installation on both distributions.

Our initial environment consists of a CentOS 7 server with IP 192.168.0.29 where we will install RackTables. We will later add other machines as part of our assets to be managed.

Step 1: Installing LAMP Stack

1. Basically, RackTables requires a LAMP stack to operate:
-------------- On CentOS and RHEL 7 --------------
# yum install httpd mariadb php
-------------- On Fedora 24 and 23 --------------
# dnf install httpd mariadb php
-------------- On Debian and Ubuntu --------------
# aptitude install apache2 mariadb-server mariadb-client php5
2.
Don’t forget to start and enable the web and database servers:
# systemctl start httpd
# systemctl start mariadb
# systemctl enable httpd
# systemctl enable mariadb
The web and database servers may already have been started by default. If not, use the same systemd-based commands to do it yourself. Also, run mysql_secure_installation to secure your database server.

Step 2: Download RackTables Tarball

3. Finally, download the tarball with the installation files, untar it, and perform the following steps. The latest stable version at the time of this writing (early July 2016) is 0.20.11:
# wget https://sourceforge.net/projects/racktables/files/RackTables-0.20.11.tar.gz
# tar xzvf RackTables-0.20.11.tar.gz
# mkdir /var/www/html/racktables
# cp -r RackTables-0.20.11/wwwroot /var/www/html/racktables
Now we can proceed with the actual RackTables installation in Linux, which we will cover in the next section.

Step 3: Install RackTables in Linux

The following actions need to be performed only after the above steps have been completed.

4. Launch a web browser and go to http://192.168.0.29/racktables/wwwroot/?module=installer (don’t forget to change the IP address or use a specific hostname instead). Next, click Proceed:

5. If some items are missing from the checklist that follows, return to the command line and install the necessary packages. In this case we will ignore the HTTPS message to simplify our setup, but you are strongly encouraged to use HTTPS if you are considering deploying RackTables in a production environment. We will also ignore the other items inside yellow cells, as they are not strictly required to make RackTables work. Once we have installed the following packages and restarted Apache, we will refresh the above screen and all tests should show as passed:
# yum install php-mysql php-pdo php-mbstring
Important: If you do not restart Apache, you will not be able to see the changes even if you click on Retry.

6.
6. Make the configuration file writable by the web server and disable SELinux during the installation:

# touch /var/www/html/racktables/wwwroot/inc/secret.php
# chmod 666 /var/www/html/racktables/wwwroot/inc/secret.php
# setenforce 0

Step 4: Create RackTables Database

7. Next, open a MariaDB shell with:

# mysql -u root -p

Important: Enter the password assigned to the root MariaDB user when you executed the mysql_secure_installation command.

Then create the database and grant the necessary permissions to racktables_user (replace MY_SECRET_PASSWORD with one of your choosing):

CREATE DATABASE racktables_db CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON racktables_db.* TO racktables_user@localhost IDENTIFIED BY 'MY_SECRET_PASSWORD';
FLUSH PRIVILEGES;

Then click Retry.

Step 5: Complete RackTables Setup

8. Now it's time to set the right ownership and minimum permissions for the configuration file:

# chown apache:apache /var/www/html/racktables/wwwroot/inc/secret.php
# chmod 400 /var/www/html/racktables/wwwroot/inc/secret.php

9. After clicking Retry in the previous step, the database will be initialized.

10. You will be prompted to enter a password for the RackTables administrative account. You will use this password to log in to the web-based interface in the next step.

11. If everything goes as expected, the installation should now be complete. When you click Proceed, you will be prompted to log in. Enter admin as username and the password you chose in the previous step for the administrative account. You will then be taken to the RackTables main user interface.

12. To access the UI more easily in the future, you may consider adding a symbolic link that points to the wwwroot directory in /var/www/html/racktables:

# ln -s /var/www/html/racktables/wwwroot/index.php /var/www/html/racktables/index.php

Then you will be able to log in via http://192.168.0.29/racktables. Otherwise, you will need to use the full path.
13. One final adjustment you may want to make is replacing MyCompanyName (upper left corner) with the name of your company. To do that, click on RackTables Administrator (upper right corner) and then on the Quick links tab. Next, make sure Configuration is checked and save changes by clicking on the icon with the blue arrow pointing to the disk at the bottom of the screen. Finally, click on the newly-added Configuration link at the top of the screen, then click User interface and Change.

We are now ready to add equipment and other data to our asset management system.

Step 6: Adding RackTables Equipment and Data

14. When you first log in to the UI, you will see the following self-explanatory asset and miscellaneous categories:

- IPv4 space
- IPv6 space
- IP SLB
- Log records
- Virtual resources
- Patch cables

Feel free to click on them and spend some time becoming familiar with RackTables. Most of the above categories have two or more tabs where you can view a summary of the inventory and add other items. In addition, you can refer to the following resources for more information:

- Wiki: https://wiki.racktables.org/index.php/Main_Page
- Mailing list: http://www.freelists.org/list/racktables-users

After completing the RackTables installation, you can re-enable SELinux using:

# setenforce 1

Step 7: Logging out of a RackTables Session

15. To log out from your current user session in RackTables, you will need to add the else branch shown below in /var/www/html/racktables/wwwroot/inc/interface.php, inside the function showLogoutURL():

if ($dirname != '/')
    $dirname .= '/';
else
    $dirname .= 'racktables';

Then restart Apache. When you click logout (upper right corner), another login box will appear. Dismiss it by clicking Cancel and your session will be terminated. To log on again and pick up where you left off, click the Back button in your browser and log in with your usual credentials.
In this article we have explained how to set up RackTables, an asset management system for your IT inventory. Don’t hesitate to let us know if you have any questions about or suggestions to improve this article. Feel free to use the comment form below to reach us anytime. We look forward to hearing from you!
I am having issues with MySQL all of a sudden today.

- OS: CentOS release 5.7
- Server type: Parallels Virtuozzo container running on Media Temple DV 4.0 package
- Average total memory usage: <500mb
- Total memory usage allowed: 1gb (part of shared pool for emergency only, users are only guaranteed 500mb)
- Processor: >1ghz
- Main database sizes with most usage: 275mb & 107mb
- Server stack: nginx 1.0.10, mysql 5.1.54, php 5.3.8 with php-fpm
- php-fpm max children: 5
- Webapps: custom php-based sites, Magento & Drupal
- Slow query timeout is set to 1 second

Steps I completed towards diagnosis:

- Cannot restart the container yet - I will try later tonight when our domestic traffic has dropped.
- Enabled the mysql and php-fpm slow logs.
- Found functions that did DB queries in the php-fpm slowlog were taking over 1s to complete at times.
- Found some simple queries in the mysql slowlog taking well over 1s to complete that should take less than 1s.
- Most interesting: execution time seems to spike at times. A query will take .2s a couple times, then one time it will take 8s to run the same query. These results were verified by running raw SQL queries through the mysql command line.
- Top does not reveal anything too interesting - the only resource-related thing I can see is load averages much higher than normal.
- Up until today, mysql has been fine; there have been no major changes to the db since yesterday.
- Sometimes things are so bad, I am seeing bad gateway errors after 60s of execution time.
- InnoDB is doing on average 300-1400 reads/sec.
- MySQL is doing 3-10 queries/sec
- Slow query count in 2 hours of uptime is 171 (with the slow timeout at 1 second)
- Tried restarting mysql, nginx, php-fpm multiple times

UPDATE `catalogsearch_query`
SET `query_text` = 'EW 90', `num_results` = '7532', `popularity` = '99180',
    `redirect` = NULL, `synonym_for` = NULL, `store_id` = '1',
    `display_in_terms` = '1', `is_active` = '1', `is_processed` = '1',
    `updated_at` = '2012-05-08 21:38:31'
WHERE (query_id='31');

This query took 17 sec to complete one time, the rest of the time around .079 sec. But it varies: sometimes 1 sec, sometimes .004 sec. This is from running the same query over and over with a couple of seconds between each run. Most tables are InnoDB, and sometimes I noticed the lock time taking 90% of the query execution time, but most of the time lock time is insignificant. Any idea what's going on here?
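The spiky latencies described above are easier to reason about when you look at the tail of the timing distribution rather than the average. A minimal sketch of such a timing loop (here `run_query` is a placeholder that simulates a mostly fast query with occasional lock-induced spikes; in practice it would execute the UPDATE through a MySQL client):

```python
import random
import statistics
import time

def run_query():
    """Placeholder for executing the repeated UPDATE through a MySQL client.
    Simulates a mostly-fast query with occasional contention spikes."""
    time.sleep(random.choices([0.002, 0.08], weights=[9, 1])[0])

samples = []
for _ in range(50):
    start = time.perf_counter()
    run_query()
    samples.append(time.perf_counter() - start)
    time.sleep(0.01)  # short pause between runs (a couple of seconds in the real test)

samples.sort()
median = statistics.median(samples)
p95 = samples[int(0.95 * len(samples))]
print(f"median={median*1000:.1f}ms p95={p95*1000:.1f}ms max={samples[-1]*1000:.1f}ms")
```

A large gap between the median and the 95th percentile, like the .079 sec vs. 17 sec numbers reported above, points at intermittent contention (lock waits or I/O stalls) rather than a uniformly slow query plan.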
As the initial author of TIM, I would like to say something about the idea behind it. TIM originally was never intended to be a ROM manager or to be released for public use. In those days Grendel set release dates for dats to be released to the public. Renamers delivered their dats to Grendel, who would package everything for the release. It was always a lot of work to get this done, because not everybody took care with the headers of the dats, the naming of the dat files and so on. So at some point I coded a little tool for Grendel which made this process a bit easier. Grendel somehow liked the idea and came up with a lot of ideas for a tool for renamers which checks the dat files for validity against the TOSEC standard. More and more ideas came up until TIM ended up as a renaming and dat creation tool for renamers.

The intention was that TIM would only create dat files which were valid by TNG, and this was the point where problems arose. We saw that a lot of dats had a lot of errors regarding the naming standard by TNG. It was a pain in the a** to take care of the many different ways files were named.

At the same time Grendel wanted to go further than "just" releasing dats. He wanted to set up a website which gives the ability to browse through all the files from the dats, with the ability to include additional information like screenshots, box art, manuals etc. Also, it should have been possible to generate dat files on the fly from whatever is currently in the db. So quickly the idea behind TIM evolved into a tool which made it possible for the renamers to submit updates online through TIM to the online db, with users being able to get these updates anytime when using TIM. Also, everybody should have been given the ability to submit any kind of additional information to the online db. So we started to work on the ROM manager part. Unluckily it was just me who had to take care of TIM and the online db and website stuff.

I have to agree that the ideas of what TIM and all the rest should be able to do were simply too much for one person to take care of. More and more things got added, and this is what made the whole codebase such a mess, as the initial idea was something completely different. We tried to add features and functionality to a tool which wasn't "designed" to do things like this. I agree that many great ideas came from Grendel and other people, but it was simply too much to steer things in a good direction. In the end we were just adding code that did what we wanted it to do, but we weren't able anymore to get things onto the right track. It ended up that because of RL stuff I had to give up everything related to it and somehow lost contact with Grendel.

Anyway, it was a nice experience and I got to know a lot of nice people involved in TOSEC (is idoru still alive and active?). By "accident" I stumbled across the new TOSEC home and it's nice to see that some activities are still going on. Also nice to see some people still active whom I remember from the past...

BTW, is somebody still taking care of the CPC and C64 dats? Those were the dats I initially got involved with TOSEC for as a renamer, and they still somehow draw my attention.
I wrote a commentary that made a very simple point. A published model assumed that the variance of z-scores is typically less than 1. I pointed out that this is not a reasonable assumption, because the standard deviation of z-scores is at least one, and often greater than 1 when studies vary in effect sizes, sample sizes, or both. This commentary was rejected. One reviewer even provided R-code to make his or her case. Here is my rebuttal.

Here is the R-code provided by the reviewer. We see SDs of 0.59, 0.49 and 0.46. Based on these results, the reviewer thinks that setting a prior to a range of values between 0 and 1 is reasonable.

Let's focus on the example that the reviewer claims is realistic for a p-value distribution for 80% power. The reviewer simulates this scenario with a beta distribution with shape parameters 1 and 31. The Figure shows the implied distribution of p-values. What is most notable is that p-values greater than .38 are entirely missing; the maximum p-value is .38. In this figure 80% of p-values are below .05 and 20% are above .05. This is why the reviewer suggests that the pattern of observed p-values corresponds to a set of studies with 80% power.

However, the reviewer does not consider whether this distribution of p-values could arise from a set of studies where p-values are the result of a non-centrality parameter and sampling error that follows a sampling distribution. To simulate studies with 80% power, we can simply use a standard normal distribution centered over 2.80. Sampling error will produce z-scores greater and smaller than the non-centrality parameter of 2.80. Moreover, we already know that the standard deviation of these test statistics is 1, because z-scores have the standard normal distribution as a sampling distribution (a point made and ignored by the reviewers and editor). We can now compute the two-tailed p-values for each z-test and plot the distribution of p-values.
Figure 2 shows the actual distribution in black and the reviewer's beta distribution in red. It is visible that the actual distribution has a lot more p-values that are very close to zero, which correspond to high z-scores. We can now transform the p-values into z-scores using the reviewer's formula (for one-tailed tests). We see that the standard deviation of these z-scores is greater than 1. Using the correct formula for two-tailed p-values, y = -qnorm(p/2), we of course get the result that we already know to be true.

It should be obvious that the reviewer made a mistake by assuming we can simulate p-value distributions with any beta distribution. P-values cannot follow just any distribution, because the actual distribution of p-values is a function of the properties of the distribution of test statistics that are used to compute the p-values. With z-scores as test statistics, it is well known from intro statistics that sampling error follows a standard normal distribution, which is a normal distribution with a standard deviation of 1. Any transformation of z-scores into p-values and back into z-scores does not alter the standard deviation. Thus, the standard deviation has to be at least 1.

Heterogeneity in Power

The previous example assumed that all studies have the same amount of power. Allowing for heterogeneity in power will further increase the standard deviation of z-scores. This is illustrated with the next example, where mean power is again 80%, but this time the non-centrality parameters vary with a normal distribution centered over 3.15 and a standard deviation of 1. Figure 3 shows the distribution of p-values, which is even more extreme and deviates even more from the beta distribution simulated by the reviewer. Using the reviewer's one-tailed formula, we now get a standard deviation of 1.54, but if we use the correct formula for two-tailed p-values, we end up with 1.41.
y = -qnorm(p/2)

This value makes sense because we simulated variation in z-scores with two standard normal distributions: one for the variation in the non-centrality parameters and one for the variation in sampling error. Adding the two variances gives a joint variance of 1 + 1 = 2, and a standard deviation of sqrt(2) = 1.41.

Unless I am totally crazy, I have demonstrated that we can use simple intro stats knowledge to realize that the standard deviation of p-values converted into z-scores has to be at least 1, because sampling error alone produces a standard deviation of 1. If the set of studies is heterogeneous and power varies across studies, the standard deviation will be even greater than 1. A variance less than 1 is only expected in unrealistic simulations or when researchers use questionable research practices, which reduce variability in p-values (e.g., all p-values greater than .05 are missing) and therewith also the variability in z-scores.

A broader conclusion is that the traditional publishing model in psychology is broken. Closed peer review is too slow and unreliable to ensure quality control. Neither the editor of a prestigious journal nor four reviewers were able to follow this simple line of argument. Open review is the only way forward. I guess I will be submitting this work to a journal with open reviews, where reviewers' reputation is on the line and they have to think twice before they criticize a manuscript.
Developing good UX for handling configurations of multiple objects

I am developing a dashboarding system that lets users drag and drop charts of interest. Some of our users may want to have 4-7 charts on a given page. For each chart, there are certain parameters that are needed to configure the chart (i.e., use this data source, group by this factor, etc.). A user will select a value for each such parameter to configure the chart. A problem we have is that there are over 90 parameters across all the charts we support, where each chart only uses about 5 or so. Some parameters are used in nearly all charts. Some parameters are used in only one or two charts. Like this:

I am trying to develop a good UX for allowing users to quickly select the parameter values that they want. I want to allow users to specify parameters for specific charts, but also provide alternative ways to specify a parameter across the dashboard (such that the user doesn't have to select the same parameter value 7 times).

I am currently considering the following UI: users select which parameters to "promote to global parameters". This offers the benefit of letting users directly choose which parameters are easy for them to access. Its downsides are that it's clunky: promote buttons right next to every parameter are visually a lot. I'm also worried that someone might be confused about why they can't edit the parameters directly on the charts any more. It's also possible that people will promote too many parameters to the global level, creating drop-down hell.

I have also been considering a UI where the developers designate certain common parameters (ones that are relevant on 85% of charts) to automatically be promoted to the global level. When you add a chart that has one of those parameters, it will automatically be added to the page.

The idea that you are proposing does make sense visually.
However, there are certain things we can handle in a better way:

1. Main view as preview: From our understanding, the user is constructing a dashboard, and setting the parameters will help them build the chart. How about not cluttering the main view with settings on this page? Let the main view act as a preview once they run the parameters.

2. Settings in one place: Instead of splitting the local and global settings across two different places, can we keep them in one place? Information-architecture-wise they are both part of settings. In the future, you could also consider a save-settings feature to quickly build things.

If we consider 1 and 2, this is how I would go about it:

Main view:

Settings:

3. Interaction in global settings: When adding a global parameter, inform the user how this is going to affect other charts. Global settings can be collapsible. If we are giving an "Add global parameter" action, we can probably get rid of the option to mark any global parameter as local and vice versa. If you still want to keep it, we can show the option on hover. Similarly, for local parameters, we can show "Add local parameter" and present the list. Whether to include a search bar depends on the number of parameters each chart will usually have. As with point 3 above, the interaction can be similar.

Hope this helps.

Thank you for this thorough response! You did a great job understanding the design problem. A follow-up question I have is about opting to put the local parameters in the sidebar. I have heard about direct manipulation being an interaction paradigm that is user-friendly, as it makes the user feel that they are interacting with the actual object. I put the parameter editing experience on the charts because of that (behind an 'edit chart' button). Is there a reason you opted to move the local parameters to the sidebar?

Understood.
The main reason was to keep related pieces of information grouped together. The second reason was that we never know how many parameters the user will add, so space constraints come into the picture. The sidebar solves both issues.
Current EUL liquidity on Uniswap is very low, which brings potential risks to Euler users and discourages the entrance of new investors. This proposal suggests using treasury assets to provide liquidity to the EUL/ETH pool.

Euler uses the Uniswap v3 TWAP as the oracle to price EUL tokens on the Euler protocol. If the liquidity in this pool is too low (~60K USD now) and the EUL debts are high enough (~3.2M USD now), it could become profitable for an attacker to manipulate the Uniswap TWAP oracle to extract value from vaults that have EUL debts. With higher liquidity in the pool, it becomes much more expensive to manipulate the oracle, making the attack economically unprofitable. Another side effect of higher liquidity is increasing the attractiveness of the EUL token itself, by attracting investors that would otherwise be discouraged by the lack of liquidity.

With the current Uniswap liquidity, it would cost an attacker ~$50K to increase the price of EUL by 10x. As the price needs to stay elevated for several minutes for the attack to work, the total cost of the price manipulation could be higher (as people start selling their EUL to the pool). This risk could be minimized if the attack starts at a time when blockchain activity is lower.

A potential attack could happen as follows:

- Attacker acquires EUL tokens beforehand, perhaps slowly over time or through an OTC deal.
- Swap ETH to EUL to rapidly increase the oracle price.
- Liquidate vaults that have EUL debt. Any liquidator that doesn't already have EUL wouldn't be able to participate in the liquidations, as the only on-chain source of EUL is too expensive.
- Attacker keeps liquidating vaults until it becomes too expensive to continue manipulating the oracle or the vaults' collateral is fully drained.

If any vault's collateral is drained before its debts are paid back in full, the protocol would end up with bad debt.
As this debt would be all in EUL (as it is in the "isolated tier"), it would be relatively easy to repay using the treasury, but a potential compensation to the affected users could be much more expensive.

Specification & Implementation

Use ETH and EUL from the treasury to provide liquidity to the EUL/ETH Uniswap pool. This liquidity would be owned directly by the protocol, and the swap fees would accrue back to the treasury. The LP positions would ideally be kept in a separate account controlled by the treasury. For reference, GMX keeps their liquidity in a dedicated account.

A suggestion for the price ranges/token amounts is as follows:

This would provide ~250K USD each in ETH and EUL liquidity, which is about half of what the Uniswap pool had last month. The exact LP amounts/ranges could be changed by the team as prices and market conditions change, but inflows/outflows of tokens from the liquidity account should ideally be approved by the DAO.

Voting yes signals approval of supplying liquidity from the treasury as suggested.

- Temperature check / pass
- Temperature check / fail
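As a back-of-the-envelope check on the manipulation cost cited above: Uniswap v3's concentrated liquidity makes the exact cost depend on where the liquidity sits, but a v2-style constant-product approximation gives the right intuition for why thin pools are cheap to manipulate (it lands in the same ballpark as, though somewhat above, the ~$50K figure derived from the actual v3 positions). The numbers below are illustrative assumptions, not the proposal's exact figures:

```python
import math

def cost_to_move_price(eth_reserve_usd: float, multiple: float) -> float:
    """USD cost of the ETH swapped in to multiply the EUL spot price by `multiple`
    in a constant-product pool (x * y = k, price = y / x).

    After swapping in dy ETH: y' = y + dy, x' = k / y', so
    price' / price = (y' / y)**2  =>  dy = y * (sqrt(multiple) - 1).
    """
    return eth_reserve_usd * (math.sqrt(multiple) - 1)

# ~60K USD total liquidity means roughly 30K USD on the ETH side of the pool
cost_10x = cost_to_move_price(30_000, 10)
# ~250K USD on the ETH side after the proposed treasury LP
cost_after = cost_to_move_price(250_000, 10)

print(f"move price 10x now: ~${cost_10x:,.0f}")
print(f"after adding treasury liquidity: ~${cost_after:,.0f}")
```

Because the cost scales linearly with the reserves, multiplying pool depth by ~8x multiplies the manipulation cost by the same factor, which is the core economic argument of the proposal.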
4 of 4 people found the following review helpful

A book for advanced programmers: "Not for novices" (Verified Purchase)
This review is from: Zend Framework, A Beginner's Guide (Paperback)

Before you purchase this book you have to have a clear understanding of PHP, HTML, MySQL, and a Unix-based operating system. The author doesn't explain how to deal with PHP configuration details on the web server, because his subject is not the LAMP stack or any other package such as WAMP; nor is it the author's job to explain to you how object-oriented programming works. That is an entirely different area.

I came from a background of PHP as a procedural programming language; however, I use C# for Windows-based applications, which is completely object-oriented. I also have more than a year of experience with Drupal as both a CMS and a CMF. The learning curve for Drupal is steeper than for Zend Framework, because Zend is design-pattern oriented, using object orientation to organize the code. You have more freedom in web frameworks than in CMSs, which are heavy and UI-oriented. Perhaps if you want to build a quick website with minimal effort, then use a CMS, if you are satisfied with little code modification and small module creation. However, you cannot use a CMF to create a custom web application, because you are restricted by a rigid framework and it takes more time to learn the huge API collection. Bottom line: if you are code-oriented then Zend is the better solution, and if you are UI-oriented then use a CMS for what it is.

Going back to the book: I am saying this because I have read most of it, up to chapter 10 ("feeds, web services"). This book is one of the best written on web development. It takes you from zero to hero with a complete illustration of real, commonly encountered problem solving. The examples throughout the chapters are in themselves 70% of what you need to know to get a real website up and running. The remaining 30% is your imagination and creativity. In fact, this book is only recommended for advanced and serious web developers who are novices in web application frameworks.

In relation to the models: in a nutshell, they work the same as the controllers; however, from a design point of view it is best to code your application primarily in models and use controllers as pointers. Meaning, as explained in the book: fat model, skinny controller.

I am glad I bought this book, because I had spent a week trying to figure out Zend Framework. This book explains it all in a perfect introduction to a complex subject. An added bonus is the resources provided after each chapter. Don't let the negative posts affect you by not buying this book. If you are searching hard for good advanced books, this one is for you. The negative posts are mistaken and only for those who want to be spoon-fed the knowledge.

Initial post: 5 Sep 2012 09:38:04 BDT
Gustav de Damme says:
Thanks for your review man, it was really objective and exactly what I needed, as I have been shopping around for days trying to figure out what book can be suitable for me, an intermediate php/MySQL programmer, but one who is interested but knows nothing about Magento.
Today we announce the alpha release of a powerful new tool for understanding both global events and the narratives that shape how we understand them. The new GDELT 3.0 Global Frontpage Graph (GFG) is a prototype experiment to explore how we can better understand which of the myriad news stories each day are considered the most "important" by the global media ecosystem. One of the most basic measures of how a given news outlet perceives the "importance" of a story is the positioning of that story on its website. Those stories afforded precious space on the frontpage of a news outlet's website are those it considers to be the most important at the moment. The presence, position and length of time on the front page, as well as changes in a story's position there over time, are all indicators of how the outlet's editors view the story. Thus, a news outlet might publish a steady stream of articles about Syria, but if none of those are featured on its homepage, that suggests it views them as less important than its other coverage. Similarly, out of all of the coverage of Syria that an outlet publishes today, which (if any) of those articles are featured on the frontpage? Are the frontpage selections fundamentally "different" from the rest of the outlet's Syria coverage, perhaps emphasizing a particular framing or emotional tenor? Of course, frontpage placement is not the only indicator of how an outlet perceives a given story, but it offers a powerful and globally consistent filter to help surface the stories that each outlet believes are the most important at that moment to its readership. By aggregating geographically, topically, etc., we can begin to gain a coarse understanding of the priorities of different clusters beyond simply their overall publication volume, especially in the online world in which outlets can only prioritize a small fraction of the totality of their daily output.
In short, while online news outlets can publish an unlimited volume of coverage each day, the limited space of the frontpage enforces the kind of editorial selection and displacement that makes broadcast media so valuable as a barometer of media attention and agenda setting. To make it possible to incorporate this basic "importance" metric into analyses, we are launching today the alpha release of the new GDELT 3.0 Global Frontpage Graph (GFG). Every hour we scan a list of around 50,000 news website homepages from across the world and compile a list of all HTTP/HTTPS hyperlinks contained within and the order in which they appeared in the HTML of the page (we exclude all other kinds of links like email, telephone, WhatsApp, Telegram, etc). This is compiled into a single master hourly tab-delimited file that essentially catalogs all of the hyperlinks appearing on the world's news outlet homepages. All 50,000 sites are rescanned every hour, meaning you can trace at hourly resolution how stories spread through the global media ecosystem, where on the frontpage they debuted (from above the fold to buried at the bottom), how their position on the frontpage changed through the course of the day and when they finally disappeared from the frontpage. Thus, the final format of the GFG hourly file is a giant gzip'd tab-delimited file in UTF-8, one row per hyperlink found on a homepage (typically around 10M+ links per hour), with six columns: DATE, FromFrontPageURL, LinkID, LinkPercentMaxID, ToLinkURL, and LinkText. For each homepage, the extracted links are written to the GFG file in the order they appear on the page and LinkID records the order it appeared on the page (this makes it easier to perform analyses and assess just how far down or up a page a given link moved from the previous hour(s)). More detail about the fields are: - DATE. This is the date of the snapshot in YYYYMMDDHHMMSS format. 
This field is the same for all entries in a given file and matches the filename of the file. It is included in the data to make it easy to load the hourly files directly into a database system.
- FromFrontPageURL. This is the URL of the homepage the link was found in.
- LinkID. All links in a homepage are numbered sequentially from 0 to the number of links in the page, making it easy to examine links by the order they appeared on the homepage. A news site may move a given link around on its homepage over time, so its LinkID may change between each snapshot.
- LinkPercentMaxID. This is simply the LinkID of the current link divided by the max LinkID for this homepage and multiplied by 100, allowing you to compare rough relative link positioning across sites. In other words, knowing that a given link has LinkID 100 is not enough to compare its position on two different sites, since one site might only have 100 links (meaning it is the last link on the site), while the other site might have 1000 links, meaning it is in the top 10% of the page. We recommend using this field when comparing across sites.
- ToLinkURL. The URL extracted from the homepage. Non-ASCII URLs that are already encoded in the original document (such as Punycode and percent-escaping) are preserved as-is; otherwise they are automatically escaped.
- LinkText. Up to the first 100 characters of the link text (links longer than this are truncated and "…" is appended to the end). Non-ASCII characters across all characterset encodings are transcoded to UTF-8.

For this alpha release, we include every single HTTP/HTTPS link found in an <A HREF> tag in the HTML of the page and record it in the order it appears in the HTML. This initial set of around 50,000 websites was provided by a tremendous number of different organizations and researchers from across the world and is currently separate from the main GDELT monitoring catalog (though we will be integrating them in the release of GDELT 3.0 this spring).
Thus, not all websites on this list are monitored by GDELT at the moment and not all sites monitored by GDELT are on this list. Given the immense and growing interest in understanding how narratives are both unifying and dividing societies throughout the world, we include a wide range of outlets, ranging from traditional mainstream general news sites to topical and specialty outlets to select governmental and NGO news rooms to high profile citizen media sites and a growing collection of partisan, satirical, "fake news" and divisive outlets as part of our efforts to help understand the broader contours of the narratives that are shaping our societies. The presence or absence of a given website on this list does not indicate any editorial statement regarding its stature or status as a "news" outlet and if there are sites that you would like added to this list please email us, as we're eager to rapidly grow this list to encompass the broadest possible contours of the global media ecosystem. For sites in which the majority of hyperlinks are contained in the static HTML, we fetch the HTML and extract all <A HREF> hyperlinks in order of their appearance in the HTML. Absolute hyperlinks are extracted as-is, while relative links are derelativized. Links that include non-HTTP/HTTPS protocols (mailto:, tel:, whatsapp:, tg:, etc) are excluded. Links are written to the GFG file in the order they appear in the HTML. Given the complex CSS styling used on some news websites, the order links appear in the HTML does not always perfectly match the order they appear on the page (in particular, headers/footers and insets may sometimes appear later or earlier in the HTML compared to where they appear on the page), but the ordering tends to match overall. Thus, links found earlier in the HTML tend to be those that appear towards the top of the frontpage, while those found towards the end of the HTML tend to be footers and other end-of-page links. 
Horizontal ordering is also typically reflected in the order links appear in the HTML, with sequences of links in the HTML appearing from top to bottom and left to right on the page. The same link may appear multiple times on a frontpage in different sections, which is often important information, so we do not deduplicate links: we output them as-is in the order they appear on the page. Some sites will double-link stories by including a link in both the illustrative image for a story and a "read more" link beneath the image or headline, so keep this in mind when incorporating the number of times an article is linked from a homepage into your analyses.

For the small handful of sites like CNN that incorporate such extensive dynamic content generation that the majority of the homepage is not accessible via the original HTML, we fully render the page using the latest version of Headless Chrome in desktop mode, using dynamic "adaptive behavioral scrolling" to mimic how a human scrolls through a page, including pausing to skim content sections as they dynamically load in and jumping through the page. Traditional Headless Chrome rendering typically misses the later content sections of such dynamic sites due to the way they implement their lazy-loading triggers, so the behavioral scrolling we use ensures the entire page loads.

For infinite-scrolling homepages we load either the portion of the page available in static HTML format or, for dynamic-only infinite-scrolling sites, up to the first 16,000 pixels of height in Headless Chrome, though the final content section on the page may be only partially rendered and thus missing a few links. The final fully rendered HTML is then used to extract the <A HREF> links exactly as we do for static HTML pages. The final GFG file is typically completed and ready for download around 30 minutes after each hour.
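GDELT's adaptive behavioral scrolling implementation is internal and not published, but the core idea of stepping through a page in overlapping viewport-sized increments, capped at the 16,000-pixel limit mentioned above, can be sketched as a pure scroll schedule. All parameter values here are invented for illustration:

```python
def scroll_plan(page_height, viewport=1080, cap=16000, step_factor=0.8):
    """Sketch of a behavioral scrolling schedule: visit overlapping
    viewport-sized offsets so lazy-loading triggers fire, stopping at
    the page bottom or the pixel cap, whichever comes first."""
    limit = min(page_height, cap)
    step = int(viewport * step_factor)   # overlap steps so nothing is skipped
    offsets = list(range(0, limit, step))
    if offsets and offsets[-1] < limit:
        offsets.append(limit)            # finish at the bottom (or the cap)
    return offsets
```

In a real Headless Chrome session, each offset would then be visited with `window.scrollTo` followed by a pause, so that content sections loading in at each position have time to render before the final HTML is captured.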
The first available GFG file is http://data.gdeltproject.org/gdeltv3/gfg/alpha/20180302020000.LINKS.TXT.gz, and the filename follows the format YYYYMMDDHH0000.LINKS.TXT.gz in the UTC timezone. For those wishing to download the GFG file hourly as it becomes available, check http://data.gdeltproject.org/gdeltv3/gfg/alpha/lastupdate.txt at 30-40 minutes after each hour to be alerted when the latest file is available for download. The full dataset is also available in Google BigQuery as the partitioned table gdelt-bq:gdeltv2.gfg_partitioned.

Note that in this alpha release we output the complete list of every URL found in each homepage. Future versions may filter this list to record only URLs that have been added, removed, or changed position on the page, so we're very interested in your feedback on what kind of filtering or format is most useful to your work. Please get in touch and let us know of sites you want added, filtering that would be useful to you, and so on as we build upon this alpha release. Happy analyzing!

- http://data.gdeltproject.org/gdeltv3/gfg/alpha/lastupdate.txt (check 30 minutes after each hour for the latest file URL)
- Google BigQuery: gdelt-bq:gdeltv2.gfg_partitioned
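Since the filename encodes the UTC hour, the download URL for any given hour can be constructed directly rather than scraped. A small Python sketch:

```python
from datetime import datetime, timezone

BASE = "http://data.gdeltproject.org/gdeltv3/gfg/alpha/"

def gfg_url(dt):
    """Build the hourly GFG file URL for a UTC datetime, following the
    YYYYMMDDHH0000.LINKS.TXT.gz naming scheme described above."""
    return BASE + dt.strftime("%Y%m%d%H") + "0000.LINKS.TXT.gz"

# The first available file, 2018-03-02 02:00 UTC:
first = datetime(2018, 3, 2, 2, tzinfo=timezone.utc)
print(gfg_url(first))
# -> http://data.gdeltproject.org/gdeltv3/gfg/alpha/20180302020000.LINKS.TXT.gz
```

For hourly automation, polling lastupdate.txt at 30-40 minutes past the hour remains the safest approach, since it confirms the file has actually finished generating.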
Recently, I came across a proof of concept (POC) where the customer had deployed edge node VMs in the small form factor. After the successful POC, we decided to move to production and hit an issue: we needed to resize our edge node VMs to medium to support at least one load balancer. So here is a big gotcha: even if you only want to do a POC but are planning to leverage load balancer functionality in the future, start with at least a medium-size edge node VM.

A quick recap of the different edge VM sizes NSX-T supports as of NSX-T 2.4.

Now let's say you have already deployed the edge node VMs in the small form factor in the edge cluster: how do you go about resizing your edge VMs in NSX-T? There is no "Change appliance size" knob to resize the edge VMs as in NSX-v, but there is a step-by-step process you can follow to achieve similar results. Let's see how.

Let's look at our current deployment first. We have three edge node VMs configured as transport nodes, with status up. We have an edge cluster with two small edge node VMs, edge-01a and edge-02a. We are looking to swap medium-edge in place of edge-02a. Let's check the state of the T0 and T1 SRs on this edge node.

Set edge-02a in maintenance mode. If there are any T0 or T1 SRs in the active state, setting the edge node in maintenance mode will trigger an HA failover of the active SRs onto the other edge node. This will cause temporary traffic disruption for the active SRs hosted on this edge node, as a failover is involved.

Now, let's see how to swap this edge node (in maintenance mode) with medium-edge. On the NSX-T Manager, navigate to Fabric -> Nodes -> Edge Clusters and select the edge cluster. From the actions menu, select "Replace Edge Cluster Member". Select the small edge node edge-02a that you want to replace with the medium-edge node. As we can see below, edge-02a has been successfully replaced with medium-edge.
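The same replacement could in principle be scripted against the NSX-T Manager REST API rather than clicked through the UI. The endpoint action and body field names below follow NSX-T API naming conventions but are assumptions on my part, not taken from this walkthrough; verify them against the API reference for your NSX-T version before using anything like this. The sketch only builds the request, with placeholder IDs:

```python
import json

# Placeholder manager address; substitute your NSX-T Manager FQDN.
NSX_MGR = "https://nsx-manager.example.local"

def replace_member_request(edge_cluster_id, old_node_id, new_node_id):
    """Build the (assumed) 'replace edge cluster member' API call.
    The action name and body fields are guesses based on NSX-T API
    conventions; check your version's API guide before relying on them."""
    url = (f"{NSX_MGR}/api/v1/edge-clusters/{edge_cluster_id}"
           "?action=replace_transport_node")
    body = json.dumps({
        "transport_node_id": old_node_id,       # e.g. the small edge-02a
        "new_transport_node_id": new_node_id,   # e.g. the new medium-edge
    })
    return url, body
```

Whether driven by API or UI, the old node should already be in maintenance mode before the replacement, for the failover reasons described above.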
Now, let's check the status of our logical routers on medium-edge. As shown below, all the T0s and T1s have successfully migrated from edge-02a to medium-edge, and the SRs show up in the Standby state. You can similarly deploy another edge node VM in the medium form factor and replace edge-01a, so that both edge nodes in the edge cluster are of the medium form factor. In summary, although not a one-click operation, resizing your edge node VMs in NSX-T is pretty straightforward.