
Ttacy341's Posts



Education / Microsoft Dynamics AX Interview Questions by Ttacy341(f): 2:03pm On Mar 01, 2019
If you're looking for Microsoft Dynamics AX interview questions and answers for experienced candidates or freshers, you are in the right place. There are lots of opportunities at many reputed companies around the world. According to research, Microsoft Dynamics AX has a market share of about 6.6%, so you still have the opportunity to move ahead in your career in Microsoft Dynamics AX development. Mindmajix offers advanced Microsoft Dynamics AX interview questions for 2019 that help you crack your interview and acquire your dream career as a Microsoft Dynamics AX developer.

Wish to Learn Microsoft Dynamics AX? Enroll now for FREE demo on [url="https://mindmajix.com/microsoft-dynamics-ax-training"]Microsoft Dynamics AX Training[/url].
Q. What is Microsoft Dynamics AX?
Microsoft Dynamics AX is a multi-language, multi-currency, industry-specific, global ERP product and one of the members of Microsoft's Dynamics ERP family.

Q. Difference between edit and display method
Display indicates that the method's return value is to be displayed on a form or a report; the value cannot be altered in the form or report.
Edit indicates that the method's return value is used to provide information for a field on a form, and the value in that field can be edited.
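
A minimal sketch of the two method types as they might appear on a table (the EDT and field names are illustrative, not taken from a specific standard table):

display CustName customerName()
{
    // display: the value is shown on the form or report but cannot be edited there
    return this.Name;
}

edit CustName editCustomerName(boolean _set, CustName _name)
{
    // edit: _set is true when the user changes the value in the form
    if (_set)
    {
        this.Name = _name;
    }
    return this.Name;
}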

Q. Difference between perspectives and table collection
Perspectives organize information for a report model in the Application Object Tree (AOT).
A perspective is a collection of tables; you use a report model to create reports.
A table collection is a collection of tables that is shared across all the virtual companies.

Q. Why do we use virtual companies?
Virtual company accounts contain data in certain tables that are shared by any number of company accounts. This allows users to post information in one company that will be available to another company.

Q. How can we restrict a class to be further extended?
Use the final keyword, for example: public final class MyClass

Q. Which classes are used for data import export?
SysDataImport and SysDataExport

Q. From which table can you get the user permissions stored in AX?
AccessRightList table.

Q. What should we do if we need the last record to be active when a form is opened?
In the form data source properties, set the StartPosition property to Last.

Q. What is the sequence of events while a report is generated?
init(), run(), prompt(), fetch(), send(), print()

Q. Name a few X++ classes/core classes related to queries.
Query, QueryRun, QueryBuildRange, QueryBuildDataSource, QueryBuildLink

Q. What is an index?
An index is a table-specific database structure that speeds the retrieval of rows from the table. Indexes are used to improve the performance of data retrieval and sometimes to ensure the existence of unique records.

Q. Define IntelliMorph
IntelliMorph is the technology that controls the user interface in Microsoft Dynamics AX. The user interface is how the functionality of the application is presented or displayed to the user.
IntelliMorph controls the layout of the user interface and makes it easier to modify forms, reports, and menus.

Q. Define MorphX
The MorphX Development Suite is the integrated development environment (IDE) in Microsoft Dynamics AX used to develop and customize both the Windows interface and the Web interface.

Q. Define X++
X++ is the object-oriented programming language that is used in the MorphX environment.

Q. Differentiate refresh(), reread(), research(), executequery()
refresh() will not reread the record from the database. It basically just refreshes the screen with whatever is stored in the form cache.
reread() will only re-read the CURRENT record from the DB so you should not use it to refresh the form data if you have added/removed records. It’s often used if you change some values in the current record in some code, and commit them to the database using .update() on the table, instead of through the form datasource. In this case .reread() will make those changes appear on the form.
research() will rerun the existing form query against the data source, therefore updating the list with new/removed records as well as updating existing ones. This will honour any existing filters and sorting on the form.
executeQuery() is another useful one. It should be used if you have modified the query in your code and need to refresh the form. It's like research() except it takes query changes into account.
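
A minimal sketch of the typical reread() use case described above, assuming a form with a CustTable data source named custTable_ds and a current buffer custTable (field and enum names are from standard AX but may differ by version):

void markCustomerOnHold()
{
    CustTable custTableLocal;

    ttsBegin;
    custTableLocal = CustTable::find(custTable.AccountNum, true);
    custTableLocal.Blocked = CustVendorBlocked::All;
    custTableLocal.update();        // committed directly on the table, not through the data source
    ttsCommit;

    custTable_ds.reread();          // pull the current record back from the database
    // custTable_ds.research(true); // would rerun the whole form query instead
}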

Q. Define AOT
The Application Object Tree (AOT) is a tree view of all the application objects within Microsoft Dynamics AX. The AOT contains everything you need to customize the look and functionality of a Microsoft Dynamics AX application

Q. Define AOS
The Microsoft Dynamics AX Object Server (AOS) is the second-tier application server in the Microsoft Dynamics AX three-tier architecture.
The 3-tier environment is divided as follows:
1. First Tier – Intelligent Client
2. Second Tier – AOS
3. Third Tier – Database Server
In a 3-tier solution the database runs on a server as the third tier; the AOS handles the business logic in the second tier. The thin client is the first tier and handles the user interface and necessary program logic.

Q. Difference between temp table and container.
1. Data in containers are stored and retrieved sequentially, but a temporary table enables you to define indexes to speed up data retrieval.
2. Containers provide slower data access if you are working with many records. However, if you are working with only a few records, use a container.
3. Another important difference between temporary tables and containers is how they are used in method calls. When you pass a temporary table into a method call, it is passed by reference. Containers are passed by value. When a variable is passed by reference, only a pointer to the object is passed into the method. When a variable is passed by value, a new copy of the variable is passed into the method. If the computer has a limited amount of memory, it might start swapping memory to disk, slowing down application execution. When you pass a variable into a method, a temporary table may provide better performance than a container
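
A minimal sketch of the difference in use (TmpCustList is a hypothetical temporary table with an AccountNum field, used only for illustration):

static void tmpVsContainer(Args _args)
{
    container   con;
    TmpCustList tmpCustList;

    // containers store and retrieve values sequentially, by position
    con = conIns(con, 1, "4000");
    info(strFmt("%1", conPeek(con, 1)));

    // temporary tables behave like normal tables and can use indexes
    tmpCustList.AccountNum = "4000";
    tmpCustList.insert();

    select firstOnly tmpCustList
        where tmpCustList.AccountNum == "4000";
    info(tmpCustList.AccountNum);
}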

Q. What is an EDT, Base Enum, how can we use array elements of an EDT?
EDT – to reuse its properties. The properties of many fields can be changed at one time by changing the properties on the EDT. Relations assigned to an EDT are known as dynamic relations.
EDT relations are Normal and Related field fixed.
Why not Field fixed – Field fixed only works between two tables with a 1:1 relation, whereas Related field fixed works with 1:many tables, so EDTs use Related field fixed.
Base enum – a list of literals. Enum values are represented internally as integers; you can declare up to 251 (0 to 250) literals in a single enum type. To reference an enum in X++, use the name of the enum, followed by the name of the literal, separated by two colons, e.g. NoYes::No.
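
A minimal sketch of declaring a variable of a base enum type and referencing its literals in X++:

static void baseEnumExample(Args _args)
{
    NoYes approved;

    approved = NoYes::Yes;                              // EnumName::Literal

    if (approved != NoYes::No)
    {
        info(strFmt("Stored internally as %1", enum2int(approved)));
    }
}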

Q. Definition and use of Maps, how AddressMap (with methods) is used in standard AX?
Maps define X++ elements that wrap table objects at run time. With a map, you associate a map field with a field in one or more tables. This enables you to use the same field name to access fields with different names in different tables. Map methods enable you to create or modify methods that act on the map fields.
For example, the AddressMap map contains an Address field, which is used to access both the Address field in the CustTable table and the ToAddress field in the CustVendTransportPointLine table.

Q. What is the difference between Index and Index hint?
Adding the “index” statement to an Axapta select does NOT mean that this index will be used by the database; it only means that Axapta will send an “order by” to the database. Adding the “index hint” statement to an Axapta select DOES mean that this index will be used by the database (and no other one).
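
A minimal sketch of both forms of the select statement (AccountIdx is a standard index on CustTable, but treat the names as illustrative):

static void indexVsIndexHint(Args _args)
{
    CustTable custTable;

    // "index": only an ORDER BY matching the index definition is sent to the database
    while select custTable index AccountIdx
    {
        // ...
    }

    // "index hint": the database is told to use the named index
    while select custTable index hint AccountIdx
    {
        // ...
    }
}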

Q. How many types of data validation methods are written on table level?
validateField(), validateWrite(), validateDelete(), aosValidateDelete(), aosValidateInsert(), aosValidateRead(), aosValidateUpdate()

Q. How many types of relations are available in Axapta, Explain each of them.
Normal relation: enforces referential integrity, such as foreign keys, and is used for displaying lookups on the child table.
Field fixed: works as a trigger to verify that a relation is active; the relation is active only if an enum field in the table has a specific value. It is used for conditional relations and works on enum fields.
Ex – Dimension table
Related field fixed: works as a filter on the related table; it only shows records that match the specified value for an enum field on the related table.

Q. When the recid is generated, what is its utility?
When a record is inserted in a table, the RecId is generated by the kernel; it is unique within each table.

Q. Difference between Primary & Cluster index.
Primary index: It works on unique indexes; the data should be unique and not null. Data is retrieved from the database.
Clustered index: It works on unique and non-unique indexes. Data is retrieved from the AOS.
The advantages of having a cluster index are as follows:
1. Search results are quicker when records are retrieved by the cluster index, especially if records are retrieved sequentially along the index.
2. Other indexes that use fields that are a part of the cluster index might use less data space.
3. Fewer files in the database; data is clustered in the same file as the clustering index. This reduces the space used on the disk and in the cache.
The disadvantages of having a cluster index are as follows:
1. It takes longer to update records (but only when the fields in the clustering index are changed).
2. More data space might be used for other indexes that use fields that are not part of the cluster index (if the clustering index is wider than approximately 20 characters).

Check Out Microsoft Dynamics AX Tutorials

Q. How many kinds of lookups can be made, and how?
1. Using table relations.
2. Using EDT relations.
3. Using MorphX or X++ code (the SysLookup class).

Q. How many types of delete actions are there in standard AX, and what is the use of each?
1. None – no action is taken on related records.
2. Cascade – deleting the parent record also deletes the related records in the child tables.
3. Restricted – the parent record cannot be deleted while related records exist in the child tables; an error is shown instead.
4. Cascade + Restricted – deletion is restricted when the record is deleted directly, but cascades when the delete arrives through a cascading delete from a higher-level table.

Q. What is the function of super()
It calls the corresponding system/base class method so that the inherited behaviour executes.
It is used to initialize variables defined in the parent class and to avoid code redundancy.

Q. Utility and use of find method.
All tables should have at least one find method that selects and returns one record from the table matching the unique index specified by the input parameters. The last input parameter in a find method should be a Boolean variable called _forUpdate (or update) that is defaulted to false. When it is set to true, the caller can update the record that is returned by the find method.
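
The pattern usually looks like the following sketch (shown here for CustTable; the standard method may differ slightly between versions):

static CustTable find(CustAccount _custAccount, boolean _forUpdate = false)
{
    CustTable custTable;

    custTable.selectForUpdate(_forUpdate);

    if (_custAccount)
    {
        select firstOnly custTable
            where custTable.AccountNum == _custAccount;
    }

    return custTable;
}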

Q. What are the different types of Table groups defined on table properties?
1. Miscellaneous
2. Parameter
3. Group
4. Main
5. Transaction
6. WorkSheetHeader
7. WorkSheetLine

Q. Multiple inheritance possible or not, if not how can we overcome that.
In X++, a new class can only extend one other class; multiple inheritance is not supported. If you extend a class, it inherits all the methods and variables in the parent class (the superclass).
We can use Interfaces instead of multiple inheritance in Ax.

Q. Do we need to write main method, give reasons
Not necessarily; but to open the class from an action menu item, we have to create a static main method in the class.

Q. What is difference between new & construct method
new(): allocates memory for and creates an instance of the object.
construct(): you should create a static construct method for each class; the method should return an instance of the class.
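
A minimal sketch of the pattern (MyClass is a hypothetical class; in the AOT the class declaration and each method live in separate nodes, shown together here for brevity):

// classDeclaration
class MyClass
{
}

// new() is usually kept protected so callers go through construct()
protected void new()
{
}

// construct() returns a ready-to-use instance
public static MyClass construct()
{
    return new MyClass();
}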

Q. What is the utility of the RunOn property
Application objects such as reports, tables, and methods can run on the Application Object Server (AOS) or on the client. An object can also have the RunOn property value set to Called from; objects set to Called from run on either the client or the server, depending on where the object is called from.
Default value of RunOn for Classes – Called from
MenuItems – Client

Q. What is main class used in batch process OR which class will you inherit to make a batch job
RunBaseBatch class

Q. How can we make a batch job occur at a regular interval?
– Using RunBaseBatch and setting up the recurrence when the batch job is scheduled.

Q. What is the main utility of classes in standard Ax
– For business logic

Q. Which class is called when we create a SO/PO.
SalesFormLetter and PurchFormLetter

Q. What is the basic structure of a form
Methods, Data Sources, Design.

Q. Properties of a form datasource
Name, Table, Index, AllowCheck, AllowEdit, AllowCreate, AllowDelete, StartPosition, JoinSource, LinkType.

Q. validateWrite() method can be written in form datasource as well as table level, when should we write it in form DS and when in table. Similar in case of write() method
If we want the validation at the table level, i.e. in every form where this table is used, we write it at the table level.
If we want validations only on a particular form, so that they do not affect other forms where this table is used, we use form-level validations.

Q. How can we call table level methods from form DS (similar methods)
By creating a table buffer variable and calling tableVariable.methodName() on it.
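
A minimal sketch, e.g. inside a form method on a form that has a CustTable data source named custTable_ds (the account number is just an example):

void demoCallTableMethods()
{
    CustTable custTableBuffer;

    custTableBuffer = custTable_ds.cursor();                   // buffer for the current record
    info(strFmt("%1", custTableBuffer.name()));                // instance method defined on the table
    info(strFmt("%1", CustTable::find("4000").AccountNum));    // static table method
}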

Q. What is the difference between form init() & DS init()
Form init(): init is activated immediately after new and creates the run-time image of the form.
DS init(): Creates a data source query based on the data source properties.
The form data source init method creates the query to fetch data from the database and sets up links if the form is linked to another form.

Q. When a form opens what are the sequential methods called.
Form init(), data source init(), form run(), data source executeQuery(). (canClose() and close() are called when the form is closed, not when it opens.)

Q. Where is the best place to write code to perform filter in a form
Override executeQuery() on the form data source, add the filter ranges there, and call this method from the filter control on the form design.
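
A sketch of such an executeQuery() override on the form data source (CustGroupFilter is a hypothetical StringEdit control on the design whose modified() method calls custTable_ds.executeQuery()):

public void executeQuery()
{
    // apply the filter value entered by the user as a query range
    SysQuery::findOrCreateRange(
        this.query().dataSourceTable(tableNum(CustTable)),
        fieldNum(CustTable, CustGroup)).value(queryValue(CustGroupFilter.text()));

    super();
}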

Q. What are the different types of menu items available, explain each of them
Display – for forms
Output – for reports
Action – for classes

Q. Difference between pass by reference and pass by value?
Pass by reference: the address of the variable is passed to the function. Whatever changes are made to the formal parameter will affect the actual parameter.
– The same memory location is used for both variables (formal and actual).
– It is useful when you need to return more than one value.
Pass by value:
– The value of the variable is passed. Changes made to the formal parameter will not affect the actual parameter.
– Different memory locations are created for the two variables.
– A temporary variable is created on the function stack, which does not affect the original variable.
In case of pass by value, the change in the sub-function will not cause any change in the main function whereas in pass by reference the change in the sub-function will change the value in the main function.
Pass by value sends a COPY of the data stored in the variable you specify, pass by reference sends a direct link to the variable itself. So if you pass a variable by reference and then change the variable inside the block you passed it into, the original variable will be changed. If you simply pass by value, the original variable will not be able to be changed by the block you passed it into but you will get a copy of whatever it contained at the time of the call.

Q. What are the two most important methods on the Report?
init(), run(), fetch(), send(), print()

Q. Visual SourceSafe and MDAX 4.0
I have installed Visual SourceSafe version 6 sp6. I want to use it inside AX. When I use the development tools version control – setup system settings and add a database, I receive this error:
“COM object of class ‘SourceDepot.SDConnection’ could not be created. Ensure that the object has been properly registered on computer ‘WMLI009230’”.
If I then use version control – setup – Version control parameters and change the Version control system to Visual Source Safe I receive this error:
“Cannot create instance of CLSID_VSSDatabase. Check that Visual SourceSafe client is installed properly”.
Do you have any solution to this problem?
A: Yes, there is a solution. You need to use VSS 2005.
Education / Security In The Docker by Ttacy341(f): 4:02am On Feb 25, 2019
>> The security of the Docker is very important. This is because it is used in production environments. If its security is not enhanced, then private data and information can be lost and get into wrong hands.

>> The first measure for ensuring security in Docker is the use of the “docker” group. If you do not know how to do this, consult the book “Docker. The first look” by Kevin Watts. Users who have been added to this group can freely access the computer and carry out any tasks, including modifying the file systems. This explains why you need to be careful while adding users to the group: only trusted users should be added.

Learn how to use Docker, from beginner basics to advanced techniques, with online video tutorials taught by industry experts. Enrol for Free Docker Training Demo!

>> Also, Docker has introduced the “--security-opt” flag to the command line. With this flag, users are able to set AppArmor and SELinux profiles and labels. Suppose that you came up with a policy which allows the container to listen only to Apache ports. If this policy was defined in svirt_apache, then it can be applied to your container by use of the following command:
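
(The command itself did not survive in this post; a sketch based on the Docker SELinux example of that era might look like the line below, with the label type matching your svirt_apache policy.)

docker run --security-opt label:type:svirt_apache_t -i -t centos bash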



>> This will make the process of running Docker-in-Docker very easy for users, as they will not have to use “docker run --privileged” on kernels that support these profiles.
Nairaland / General / 10 Places To Study Blockchain Technology Courses Online by Ttacy341(f): 12:32pm On Sep 27, 2018
Blockchain technology became popularised through the invention of Bitcoin in 2008 by the allegedly Japanese Satoshi Nakamoto, the unknown inventor of Bitcoin, the first blockchain and the first decentralized digital currency.

Blockchain technology has since then become the foremost software or platform upon which other digital assets are built. With the explosion of other cryptocurrencies and uses of blockchain technology, many are looking to build their knowledge in this area to enable them to make the best of this opportunity. Blockchain technology is being used by revolutionary minds not just in the financial sector or investment industry but also in areas such as fashion, healthcare, gaming, and more. It is possible to apply this technology to literally every area of life with the assurance of security and integrity. If you are looking for online courses on the blockchain, you should check out some of these:

1) Mindmajix

Mindmajix offers a self-paced learning environment for individuals alongside corporate training solutions to provide for different needs in its training on blockchain technology. It has high-quality content for its blockchain certification training course, which was designed by experts in the industry and meets the different needs of its participants. Overall, Mindmajix seeks to teach everything participants need to know about blockchain technology to equip them for dealing with and engaging in the industry.

2) B9 Lab Academy

The B9 Lab Academy provides extensive courses on blockchain technology for developers, analysts, technical executives, and other technical experts. It caters to specific and expert interest, as its courses aim to provide experienced technical participants with the extensive information needed to understand the technology behind blockchain, smart contracts, how they can be applied, and the technical as well as social frameworks behind them. Its range of courses includes Hyperledger Fabric as well as blockchain theory. The combination of free and paid courses is best suited to decision makers who need this information in order to make the right decisions for their organization or products.

3) Udemy

The Udemy online platform offers comprehensive courses on Blockchain technology and the blockchain ecosystem, catering to an array of users from the beginner to the experts. Some of its courses can take anyone with zero knowledge to a full understanding of Blockchain, how it works, how it can be applied and what surrounds it. The courses on Udemy are offered by experts in the field who can deliver on important areas such as security as well as on others such as the essentials. It offers all the resources on blockchain technology, development, cryptography, tips and tools.

4) Blockgeeks

Blockgeeks is an on-demand training and learning platform for those interested in learning about blockchain technology. It offers introductory courses, weekly mentoring, technical and non-technical courses, master courses, information on smart contracts, blockchain application and development, and much more. Its wide range of courses features videos from expert teachers and a continuously growing library, updated with the latest in blockchain technology. It also offers an opportunity to practice with quizzes and interactive code challenges.

5) IBM:

IBM offers some of the most extensively taught courses on blockchain on its website and on its Cognitive Class platform. Although they do not offer certifications, they are amongst the most well taught, expert courses on blockchain technology. IBM’s Blockchain 101: Quick-Start Guide which is based on the Hyperledger Fabric from the Linux Foundation is a collaboration of both elements to deliver expert knowledge of blockchain technologies for developers. It is taught by developers and will offer knowledge on public and private blockchains as well as distributed ledger technologies. It is however made for experts and developers who already have prior knowledge of Bitcoin and cryptocurrencies.

6) Khan Academy

Khan Academy is a non-profit training organization that seeks to provide free education to anyone, anywhere, leveraging the power of the internet. It is taking this initiative into the blockchain technology sector by offering free courses on blockchain. It offers a number of courses on Bitcoin and blockchain in general which are beneficial to both beginners and experts, as it breaks down blockchain technology effectively and for free. It is a good place to start and build up the required foundation for engaging with blockchain technology.

7) Class Central

Class Central helps you find free online Blockchain courses and Massive Open Online Courses (MOOCs) from various top universities and colleges from around the world. These classes cover blockchain in general as well as in specific sectors or topics such as the energy sector. There are a number of self-paced courses as well as some with set timelines. It shows a range of courses from different sites and providers which provide you with the necessary information on blockchain technology and its application.

8) Coursera

Coursera offers over 17 blockchain courses from top universities, colleges and institutions around the world. Its wide array of courses includes the popular Princeton blockchain course as well as others from Rutgers University, IBM, New York State University, NYU's Tandon School of Engineering, the Indian School of Business, and more. They cover foundational Bitcoin and blockchain theory as well as topics such as security and software design.

9) Blackstrap

Blackstrap offers a beginner's guide to blockchain technology in a series of slides covering topics such as transactions, blockchains, mining, and more. It is a free course with no certification and can be taken anywhere in a self-paced manner. It does not require any registration and is one of the easiest ways to learn about blockchain technology.

10) Edx

Edx is a well-known organiser of MOOCs which hosts university courses online in various fields. It offers a unique blockchain course tailored to business applications of this technology. Its Blockchain for Business course introduces Hyperledger and teaches how to utilise this innovative technology for businesses.

With blockchain technology making such a huge splash in various industries, the education sector is not left out. One of the challenges however is the constantly changing ecosystem which necessitates constant learning and keeping up to date with developments in the industry. However, these courses offer a great foundation for anyone looking to learn about blockchain technology.

Source of article: www.m300ministries.org


Education / Azure Active Directory by Ttacy341(f): 6:29am On Sep 12, 2018
Microsoft Azure is an open, flexible, enterprise-grade cloud computing platform for a wide range of customers.

It is a growing collection of cloud services for building, deploying and testing your applications. It also provides you with the freedom to build and deploy your applications wherever you want on the Azure cloud for your usage.

Any of the Azure applications that you would be using has Azure Active Directory (AD) services running underneath to authorize and authenticate your applications and services.

Azure Active Directory

Azure Active Directory, known in short as Azure AD, is Microsoft's multi-tenant, cloud-based directory and identity management service.

Azure Active Directory puts three services (namely core directory services, application access management and identity governance) into one single offering. With its centralized policies and rules, Azure Active Directory enables developers to handle access control to their applications.

[center]Accelerate your career with Microsoft Azure Certification Training and become an expert in Microsoft Azure.[/center]

Azure Active Directory services provide an affordable and manageable solution for SSO access to thousands of cloud SaaS applications like Office 365, DropBox and Concur, which IT admins can manage easily.

For developers, it allows you to focus on developing the applications faster with a simpler API to consume from the identity management standpoint.

Azure Active Directory (AD) services also provide options such as multi-factor authentication, device registration, and self-service password management, alongside the general active directory functionality of plain old authentication and authorization.

The major advantage is that the Azure Active Directory services can be integrated with the core Windows Active Directory services in just 4 clicks, giving administrators peace of mind by managing all authorization and authentication requests in one place.

It would be a shame not to mention that every Office 365, Azure and Dynamics CRM tenant is already an Azure AD services tenant by default.

How reliable is the Azure Active Directory (AD) service?

The most important factors that come into play when you choose Azure's active directory services are that they are multi-tenant aware, geo-distributed and highly available.

Azure AD services come with automatic failover, as Azure runs out of its 28 data centers around the world with a replication factor of 2, so you don't even have to worry about any possible data loss.

To be precise, every one of Microsoft's own cloud offerings depends on Azure AD services for its identity needs.

With the free edition of Azure AD services, as an administrator you can manage users and groups, synchronize with on-premises directories, and get SSO across Azure, Office 365 and many other SaaS offerings like Workday, Concur, Google Apps, Baux and many more.

In addition to these free edition capabilities, there are paid editions: Azure Active Directory Basic, Premium P1 and Premium P2.

These are explained in detail below, but one thing common to all three is that they are built on the Azure AD Free edition to provide additional capabilities such as self-service, security reporting, monitoring enhancements, multi-factor authentication and safer access for a mobile workforce.

Azure Active Directory Connect Your Identity Bridge

Azure Active Directory Basic:
This is designed for task workers with cloud-first requirements. It provides enhanced productivity and cost-effective features like group-based access management, self-service password reset for cloud applications, and Azure AD Application Proxy – all backed by an SLA of 99.9% availability.

Azure Active Directory Premium P1:

This is designed to provide better features over and above the basic free edition of Azure AD services with feature-rich enterprise-level identity management capabilities.

This is the perfect edition of the Azure AD services with almost all the services and features that are required for the Information Workers. This edition supports advanced administration, delegation services, and dynamic groups.

Azure Active Directory Premium P2:

This is designed with the most advanced means of protection for all your users and administrators. This edition of the AD services includes all the capabilities in Azure AD Premium P1 as well as the new Identity Protection feature.

Azure AD's Identity Protection feature takes advantage of billions of signals to provide risk-based conditional access to your applications and data. It helps discover, restrict and monitor administrators and their access to resources.

Benefits of Azure Active Directory services

Identity and access management for the cloud:

Azure Active Directory (Azure AD) is an identity and access management cloud solution which gives you a robust set of capabilities to manage users and groups.

It helps secure access to on-premises and cloud applications, including Office 365 and software-as-a-service (SaaS) applications. As explained earlier, Azure AD comes in three editions: Free, Basic and Premium.

Protect sensitive data and applications:

Azure Multi-Factor Authentication prevents unauthorized access to on-premises and cloud applications by providing an additional level of authentication.

Protect your business and mitigate potential threats with security monitoring, alerts and machine learning-based reports that identify inconsistent access patterns.

Frequently Asked Azure Interview Questions & Answers

Enable self-service for your employees:

Delegate important tasks to your employees, such as resetting passwords and creating and managing groups. Provide self-service password change, reset and self-service group management with Azure AD Premium.

Azure Active Directory

Integrate with Azure Active Directory:

We can extend any of the active directory services to get integrated with the Azure AD services to enable SSO for all applications. User attributes can be synchronized automatically to your cloud AD from any other on-premises directory that you log in from.

Conclusion:

Azure's Active Directory services bring all the enterprise directory and identity management to the cloud as a one-stop solution which caters to all identity management requirements.
Education / Azure Data Factory - Data Processing Services by Ttacy341(f): 7:17am On Sep 04, 2018
This article on Azure Data Factory will give you in-depth information about how to utilize it efficiently and effectively.

Microsoft Azure is Microsoft's offering in terms of cloud computing. It is a growing collection of cloud services that developers and IT professionals can use to build, deploy and manage applications across a global network of data centers. Using this cloud platform, you get the freedom to build and deploy applications wherever you want, using the tools that are available in Microsoft Azure.

Accelerate Your Career with Microsoft Azure Training and become expert in Microsoft Azure Enroll For Free Microsoft Azure Training Demo!
Azure Data Factory
Let us understand what Azure Data Factory is and how it helps organizations and individuals accomplish their day-to-day operational tasks.

Let's say a gaming company stores a lot of log information so that it can later make collective decisions on certain parameters using this log information.

Usually, some of the information is stored in on-premise data storage and the rest of the information is stored in the cloud.

So, to analyze the data, we need an intermediary job that consolidates all the information into one place and then analyzes it, using Hadoop in the cloud (Azure HDInsight) for the cloud data and SQL Server for the on-premises data. Let's say this process runs once a week.

Azure Data Factory is a platform where organizations can create workflows that ingest data from on-premises data stores as well as from cloud stores.

Combining the data from both these stores, the job can transform or process the data by using Hadoop, after which it can be used by BI applications.

A platform like the one described above is much needed by organizations, and Azure Data Factory is one of the biggest players in this space.

Azure Data Factory performs the following activities:

1. First of all, it is a cloud-based solution that can integrate with different types of data stores to gather information or data.

2. It helps you to create and execute data-driven workflows.

3. All the data-driven workflows are called “pipelines”.

4. Once the data is gathered, processing tools like Azure HDInsight (Hadoop), Spark and Azure Data Lake Analytics can be used to transform the data, which can then be passed to BI professionals for analysis.

In a sense, it is an Extract and Load (EL) and then Transform and Load (TL) platform, rather than a traditional Extract, Transform and Load (ETL) tool.

As of now, in Azure Data Factory, the data consumed and produced by the defined workflows is time-sliced data (i.e. it can be defined as hourly, daily, weekly, etc.).

So, based on how this parameter is set, the workflow executes and does its job, i.e. it runs on an hourly or daily basis. It is all based on the setting.

Frequently Asked Azure Interview Questions & Answers

Workflow In Depth:

As we have discussed, a pipeline is nothing but a data-driven workflow; in Azure Data Factory it is executed in three simple steps:

1. Connect and Collect

2. Transform and Enrich

3. Publish

Connect and Collect:

When it comes to data storage, especially in enterprises, a variety of data stores are used. The first and foremost step in building an information production system is to connect all the required data sources, such as SaaS services, file shares, FTP and web services, so that the data can be pushed to a centralized location for processing.

Without a proper data factory, organizations have to build custom data movement components so that the data sources can be integrated. This is an expensive affair without the use of Data Factory.

Related Page: Azure Stack

Even if these data movement components are custom built, they lack industry standards: the monitoring and alerting mechanisms aren't as effective when compared to the industry standard.

So Data Factory makes it comfortable for enterprises, as the pipelines take care of consolidating the data. For example, if you want to collect the data at a single point, you can do that in the Azure Data Lake store.

Further, if you want to transform or analyze the data, the consolidated cloud data can be the source and the analysis can be done by using Azure Data Lake Analytics, etc.

Check out Microsoft Azure Tutorials

Transform and Enrich:

After completing the connect and collect phase, the next phase is to transform and massage the data to a level where the reporting layer can consume it and generate the respective analytical reports.

Tools like Data Lake Analytics and Machine Learning can be used at this stage.

This process is considered reliable because the transformed data it produces is well maintained and controlled.

Publish:

Once the above two stages are completed, the data will be transformed to a stage where the BI team can actually consume the data and start with their analysis. The transformed data from the cloud will be pushed to on-premises sources like SQL Server.

Key Components:

An Azure subscription can have more than one Azure Data Factory instance; it is not limited to one Azure Data Factory instance per subscription. Azure Data Factory is made up of four key components that work hand in hand to provide the platform on which you can effectively execute your workflows.

Pipeline:

A data factory can have one or many pipelines associated with it; it is not mandatory to have only one pipeline per data factory. Further, a pipeline can be defined as a group of activities.

Activity:

As defined above, a group of activities is called a pipeline. An activity is a specific action to perform on the data. For example, a copy activity will only copy data from one data store to another data store.


Data Factory Supports 2 Types Of Activities:

1. Data movement activities

2. Data transformation activities

Hope you have enjoyed reading about Azure Data Factory and the steps involved in consolidating and transforming the data. If you have any valuable suggestions worth sharing, please do advise in the comments section below.
Education / Configuring Transaction In Jboss by Ttacy341(f): 1:38pm On Apr 16, 2018
A transaction can be defined as a group of operations that must be performed as a unit and can involve persisting data objects, sending a message, and so on.

When the operations in a transaction are performed across databases or other resources that reside on separate computers or processes, this is known as a distributed transaction. Such enterprise-wide transactions require special coordination between the resources involved and can be extremely difficult to program reliably. This is where Java Transaction API (JTA) comes in, providing the interface that resources can implement and to which they can bind, in order to participate in a distributed transaction.

Inclined to build a profession as JBOSS Developer? Then here is the blog post on JBOSS TRAINING ONLINE.
The EJB container is a transaction manager that supports JTA and so can participate in distributed transactions involving other EJB containers, as well as third-party JTA resources, such as many database management systems. Within JBoss AS 7 transactions are configured in their own subsystem. The transactions subsystem consists mainly of four elements:

Core environment
Recovery-environment
Coordinator-environment
Object-store
The core environment includes the Transaction Manager interface, which allows the application server to control the transaction boundaries on behalf of the resource being managed.

A transaction coordinator, in turn, manages communication with transactional objects and resources that participate in transactions.

The recovery subsystem of JBossTS ensures that results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the machine hosting them crashes or loses network connectivity.

Frequently asked Jboss Interview Questions

Within the transaction service, JBoss transaction service uses an ObjectStore to persistently record the outcomes of transactions, for failure recovery. As a matter of fact, the RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and resources that require, or may require, recovery.

The core and recovery environment can be customized by changing their socket-binding properties, which are referenced in the socket-binding-group configuration section. You might find it more useful to define custom properties in the coordinator-environment section, which might include the default-timeout and logging statistics. Here’s a sample custom transaction configuration:
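
(The configuration snippet is missing from this post; a minimal sketch of the transactions subsystem in standalone.xml, assuming the AS 7.1 schema and an example timeout of 300 seconds, might look like this.)

<subsystem xmlns="urn:jboss:domain:transactions:1.1">
    <core-environment>
        <process-id>
            <uuid/>
        </process-id>
    </core-environment>
    <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
    <coordinator-environment default-timeout="300" enable-statistics="true"/>
</subsystem>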



default-timeout specifies the default transaction timeout to be used for new transactions, which is specified as an integer in seconds.

enable-statistics determines whether or not the transaction service should gather statistical information. The default is to not gather this information.

Tip
How does the transaction timeout impact your applications?
The transaction timeout defines the timeout for all enlisted JTA transactions and thus severely affects your application's behavior. A typical JTA transaction might be started by your EJBs or by a JMS session. So, if the duration of these transactions exceeds the specified timeout setting, the transaction service will roll back the transactions automatically.
Education / Taking Jboss AS 7 In The Cloud by Ttacy341(f): 1:22pm On Apr 09, 2018
Since the concepts of cloud computing are relatively new we will at first introduce a minimal background to the reader, then we will dive headlong into the OpenShift project which is split into two main areas:

The OpenShift Express service, which will be your starting objective for leveraging cloud applications

The OpenShift Flex service, which can be used by advanced users for rolling their cloud applications into production

Introduction to cloud computing
What is cloud computing? We’re hearing this term everywhere, but what does it really mean? We have all used the cloud knowingly or unknowingly. If you have Gmail, Hotmail, or any other popular mailing service then you have used the Cloud. Simply put, cloud computing is a set of pooled computing resources and services delivered over the Web. When you diagram the relationships between all the elements it resembles a cloud.

Client computing, however, is not a completely new thing in the computer industry. Those of you who have been in the trenches of IT for a decade or two should remember that the first client-server applications were the mainframe and terminal applications. At that time, storage and CPU were very expensive and the mainframe pooled both types of resources and served them to thin-client terminals.

Learn how to use JBOSS, from beginner basics to advanced techniques, with online video tutorials taught by industry experts. Enroll for Free JBOSS Training Demo!
With the advent of the PC revolution, which brought mass storage and cheap CPUs to the average corporate desktop, the file server gained popularity as a way to enable document sharing and archiving. True to its name, the file server served storage resources to the clients in the enterprise, while the CPU cycles needed to do productive work were all produced and consumed within the confines of the PC client.

In the early 1990s, the budding Internet finally had enough computers attached to it that academics began seriously thinking about how to connect those machines together to create massive, shared pools of storage and compute power that would be much larger than what any one institution could afford to build. This is when the idea of “the grid” began to take shape.

Cloud Computing versus Grid Computing
In general, the terms grid and cloud seem to be converging due to some similarities; however there are a list of important differences between them which are often not understood, generating confusion and clutter within the marketplace.

Grid Computing requires the use of software that can divide and farm out pieces of a program as one large system image to several thousand computers. Hence, it may or may not be in the cloud depending on the type of use you make of it. One concern about the grid is that if one piece of the software on a node fails, other pieces of the software on the other nodes may fail too. This is alleviated if that component has a failover component on another node, but problems can still arise if the components rely on other pieces of software to accomplish one or more grid computing tasks.



Cloud Computing evolves from grid computing and provides on-demand resource provisioning. With cloud computing, companies can scale up to massive capacities in an instant without having to invest in a new infrastructure, train new personnel, or license new software. If the users are systems administrators and integrators, they care how things are maintained in the cloud. They upgrade, install, and virtualize the servers and applications. If the users are consumers, they do not care how things are run in the system.

Grid and Cloud: similarities and differences
Cloud computing and grid computing, however, do bear some similarities, and as a matter of fact, they are not always mutually exclusive. In fact, they are both used to economize computing by maximizing existing resources.

However, the difference between the two lies in the way the tasks are computed in each respective environment. In a computational grid, one large job is divided into many small portions and executed on multiple machines. This characteristic is fundamental to a grid; not so much to a cloud.

Cloud computing is intended to allow the user to avail various services without investing in the underlying architecture. Cloud services include the delivery of software, infrastructure, and storage over the Internet (either as separate components or a complete platform) based on the effective user demand.

Advantages of cloud computing
Having gone through the basics of cloud computing, we should now account for the benefits which are guaranteed when you transition to a cloud computing approach:

On-demand service provisioning: By using self-service provisioning, customers can get cloud services easily, without going through a lengthy process. The customer simply requests a number of computing, storage, software, process, or other resources from the service provider.

Elasticity: This characteristic of cloud computing means that customers no longer need to predict traffic, but can promote their sites aggressively and spontaneously. Engineering for peak traffic becomes a thing of the past.

Cost reduction: As a matter of fact, companies are often challenged to increase the functionality of IT while minimizing capital expenditures. By purchasing just the right amount of IT resources on demand, the organization can avoid purchasing unnecessary equipment.

Application programming interfaces (APIs): The accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.

Along with these advantages, cloud computing also bears some disadvantages or potential risks, which you must account for.

The most compelling threat is that sensitive data processed outside the enterprise brings with it an inherent level of risk, because outsourced services bypass the “physical, logical, and personnel controls” IT shops exert over in-house programs. In addition, when you use the cloud, you probably won’t know exactly where your data is hosted. In fact, you might not even know what country it will be stored in, leading to potential issues with local jurisdiction.

As Gartner Group suggests (HTTP://WWW.GARTNER.COM), you should always ask providers to supply specific information on the hiring and oversight of privileged administrators. Besides this, the cloud provider should provide evidence that encryption schemes were designed and tested by experienced specialists. It is also important to understand if the providers will make a contractual commitment to obey local privacy requirements on behalf of their customers.

Types of cloud computing
Another classification of cloud resources can be made on the basis of the location where the cloud is hosted:

Public cloud: It represents the IT resources offered as a service and shared across multiple organizations, managed by an external service provider
Private cloud: It provides the IT resources dedicated to a single organization and offered on demand
Hybrid cloud: It is a mix of private and public clouds managed as a single entity to extend capacity across clouds as needed
The decision between the different kinds of cloud computing is a matter of discussion between experts and generally depends on several key factors. For example, as far as security is concerned, although public clouds offer a very secure environment, private clouds offer an inherent level of security that meets even the highest of standards. In addition, you can add security services such as Intrusion Detection Systems (IDS) and dedicated firewalls.

A private cloud might be the right choice for a large organization carrying a well-run data center with a lot of spare capacity. It would be more expensive to use a public cloud even if you have to add new software to transform that data center into a cloud.

Frequently asked Jboss Interview Questions

On the other hand, as far as scalability is concerned, one negative point of private clouds is that their performance is limited to the number of machines in your cloud cluster. Should you max out your computing power, another physical server will need to be added. Besides this, public clouds typically deliver a pay-as-you-go model, where you pay by the hour for the computing resources you use. This kind of utility pricing is an economical way to go if you’re spinning up and tearing down development servers on a regular basis.

So, by definition, the majority of public cloud deployments are generally used for web servers or development systems where security and compliance requirements of larger organizations and their customers are not an issue.

As opposed to public clouds, private clouds are generally preferred by mid-size and large enterprises because they meet the security and compliance requirements of those larger organizations that also need dedicated high-performance hardware.



Layers of cloud computing
Cloud computing can be broadly classified into three layers of cloud stack, also known as Cloud Service Models or SPI Service Model:

Infrastructure as a Service (IaaS) : This is the base layer of the cloud stack. It serves as a foundation for the other two layers, for their execution. It includes the delivery of computer hardware (servers, networking technology, storage, and data center space) as a service. It may also include the delivery of operating systems and virtualization technology to manage the resources. IaaS makes the acquisition of hardware easier, cheaper, and faster.

Platform as a Service layer (PaaS) offers a development platform for developers. The end users write their own code and the PaaS provider uploads that code and presents it on the Web.

By using PaaS, you don’t need to invest money to get that project environment ready for your developers. The PaaS provider will deliver the platform on the Web, and in most cases, you can consume the platform using your browser. There is no need to download any software. This combination of simplicity and cost efficiency empowers small and mid-size companies, or even individual developers, to launch their own Cloud SaaS.



The final segment in cloud computing is Software as a Service (SaaS), which is based on the concept of renting software from a service provider rather than buying it yourself. The software is hosted on centralized network servers to make the functionality available over the Web or intranet. Also known as “software on demand,” it is currently the most popular type of cloud computing because of its high flexibility, great services, enhanced scalability and lower maintenance. Yahoo! mail, Google docs, and CRM applications are all instances of SaaS.
You might wonder if it’s possible that some services can be defined both as a platform and as software. The answer is, of course, yes! For example, we have mentioned Facebook: we might define Facebook both as a platform where various services can be delivered and also as business applications (Facebook API), which are developed by the end user.

JBoss cloud infrastructure
Up until the last few months, it was common to hear that JBoss AS was still missing a cloud platform while other competitors such as SpringSource already had a solid cloud infrastructure.

Well, although it's true that the application server was missing a consolidated cloud offering, this does not mean that there was little or no interest in the subject. If you have a look at the JBoss World 2010 labs, there was a lot of discussion about the cloud. One first effort exhibited at JBoss labs was CirrAS (HTTP://WWW.JBOSS.ORG/STORMGRIND/PROJECTS/CIRRAS), a set of appliances that could automatically deploy a clustered JBoss AS server in the cloud. Built using the BoxGrinder project (HTTP://BOXGRINDER.ORG/), CirrAS is composed of a set of three appliances: a front-end appliance, a back-end appliance, and a management appliance. Unfortunately, the project didn't grow any further and, up to August 2011, the portfolio of JBoss cloud applications was still minute.

At that time, RedHat announced the availability of OpenShift platform for deploying and managing Java EE applications on JBoss AS 7 servers running the cloud. Finally, it’s time for the application server to spread its wings over the clouds!

OpenShift is the first PaaS to run CDI applications and plans support for Java EE 6, extending the capabilities of PaaS to even the richest and most demanding applications. OpenShift delivers two kinds of services for rapidly deploying Java applications on the cloud:

Express is a free cloud-based platform for deploying new and existing Java EE, Ruby, PHP, and Python applications in the cloud in a matter of minutes.
Flex is a cloud-based application platform for Java EE and PHP applications which can be deployed on dedicated hosting, running middleware components. Flex is an ideal platform for those who require a great degree of control and choice over their middleware components with valuable features including versioning, monitoring, and auto-scaling.
Starting with OpenShift Express
OpenShift Express enables you to create, deploy, and manage applications within the cloud. It provides disk space, CPU resources, memory, network connectivity, and an Apache or JBoss server. Depending on the type of application you are building, you also have access to a template filesystem layout for that type (for example, PHP, WSGI, and Rack/Rails). OpenShift Express also generates a limited DNS for you.

The first thing needed to get started with OpenShift Express is an account, which can be obtained with a very simple registration procedure at:

HTTPS://OPENSHIFT.REDHAT.COM/APP/USER/NEW/EXPRESS.



Once you’ve registered and confirmed your e-mail, the next step will be installing on your Linux distribution the client tools needed to deploy and manage your applications in the cloud.

Installing OpenShift client tools
For this purpose, we suggest you use either Fedora 14 (or higher) or Red Hat Enterprise 6 (or higher).

Then you need to grab a copy of the openshift.repo file, which contains the base URL of rpm files and keys necessary to validate them. This file should be available at:

HTTPS://OPENSHIFT.REDHAT.COM/APP/REPO/OPENSHIFT.REPO.

Now, copy this file into the /etc/yum.repos.d/ directory using either sudo or root access privileges:

$ sudo mv openshift.repo /etc/yum.repos.d/

And then install the client tools:

$ sudo yum install rhc
Education / Elasticsearch Tutorial by Ttacy341(f): 11:18am On Apr 05, 2018
Elasticsearch is a real-time distributed and open source full-text search and analytics engine. It is used in Single Page Application (SPA) projects. Elasticsearch is open source developed in Java and used by many big organizations around the world. It is licensed under the Apache license version 2.0. In this brief tutorial, we will be explaining the basics of Elasticsearch and its features.

Audience
This tutorial is designed for software professionals who want to learn the basics of Elasticsearch and its programming concepts in simple and easy steps. It describes the components of Elasticsearch with suitable examples.


Prerequisites
You should have a basic understanding of Java, JSON, search engines, and web technologies. The interaction with Elasticsearch is through RESTful API; therefore, it is always recommended to have knowledge of RESTful API.

Elasticsearch is an Apache Lucene-based search server. It was developed by Shay Banon and published in 2010. It is now maintained by Elasticsearch BV. Its latest version is 2.1.0.

Elasticsearch is a real-time distributed and open source full-text search and analytics engine. It is accessible through a RESTful web service interface and uses schema-less JSON (JavaScript Object Notation) documents to store data. It is built on the Java programming language, which enables Elasticsearch to run on different platforms. It enables users to explore very large amounts of data at very high speed.
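As a quick, hedged illustration of this RESTful, schema-less interface (assuming a local node listening on port 9200 and a pre-7.x Elasticsearch release that still uses document types; the index, type, and field names are made up):

curl -XPUT 'http://localhost:9200/blog/post/1' -H 'Content-Type: application/json' -d '{"title": "Hello Elasticsearch", "views": 1}'
curl 'http://localhost:9200/blog/post/1?pretty'
curl 'http://localhost:9200/blog/_search?q=title:hello&pretty'

The first command indexes a JSON document without any upfront schema, the second retrieves it by ID, and the third runs a full-text search against the index.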

Elasticsearch – General Features
The general features of Elasticsearch are as follows −

Elasticsearch is scalable up to petabytes of structured and unstructured data.

Elasticsearch can be used as a replacement for document stores like MongoDB and RavenDB.

Elasticsearch uses denormalization to improve search performance.

Elasticsearch is one of the popular enterprise search engines and is currently used by many big organizations such as Wikipedia, The Guardian, Stack Overflow, GitHub, etc.

Elasticsearch is open source and available under the Apache license version 2.0.

Elasticsearch – Key Concepts
The key concepts of Elasticsearch are as follows −

Node − It refers to a single running instance of Elasticsearch. A single physical or virtual server can accommodate multiple nodes, depending on the capacity of its resources such as RAM, storage, and processing power.

Cluster − It is a collection of one or more nodes. A cluster provides collective indexing and search capabilities across all the nodes for the entire data set.

Index − It is a collection of documents of different types and their properties. An index also uses the concept of shards to improve performance. For example, a set of documents may contain the data of a social networking application.

Type/Mapping − It is a collection of documents that share a set of common fields and are present in the same index. For example, if an index contains the data of a social networking application, there can be a specific type for user profile data, another for messaging data, and another for comments data.

Document − It is a collection of fields defined in JSON format. Every document belongs to a type and resides inside an index, and every document is associated with a unique identifier called the UID.

Shard − Indexes are horizontally subdivided into shards. Each shard contains all the properties of a document but fewer JSON objects than the whole index. This horizontal separation makes a shard an independent unit that can be stored on any node. A primary shard is the original horizontal part of an index; primary shards are then replicated into replica shards.

Replicas − Elasticsearch allows a user to create replicas of their indexes and shards. Replication not only helps in increasing the availability of data in case of failure, but also improves search performance by carrying out parallel search operations on these replicas.
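To see these concepts on a running system, the _cat APIs give a quick, human-readable view of nodes, indices, and shards. This is only a sketch and assumes a node reachable on localhost:9200:

curl 'http://localhost:9200/_cat/nodes?v'     # the nodes that form the cluster
curl 'http://localhost:9200/_cat/indices?v'   # indices with their document counts and sizes
curl 'http://localhost:9200/_cat/shards?v'    # primary and replica shards and the nodes they live on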

Elasticsearch – Advantages
Elasticsearch is developed in Java, which makes it compatible with almost every platform.

Elasticsearch is near real time; in other words, an added document becomes searchable in this engine after about one second.

Elasticsearch is distributed, which makes it easy to scale and integrate into any big organization.

Creating full backups is easy using the concept of a gateway, which is present in Elasticsearch.

Handling multi-tenancy is very easy in Elasticsearch when compared to Apache Solr.

Elasticsearch uses JSON objects as responses, which makes it possible to invoke the Elasticsearch server with a large number of different programming languages.

Elasticsearch supports almost every document type except those that do not support text rendering.

Elasticsearch – Disadvantages
Elasticsearch does not have multi-language support for handling request and response data (it is only possible in JSON), unlike Apache Solr, where it is possible in CSV, XML, and JSON formats.

Elasticsearch can also run into split-brain situations, but only in rare cases.

Comparison between Elasticsearch and RDBMS
In Elasticsearch, an index is a collection of types, just as a database is a collection of tables in an RDBMS (Relational Database Management System). Every table is a collection of rows, just as every mapping is a collection of JSON objects in Elasticsearch.
Education / Critical Infrastructure And Cyber Security by Ttacy341(f): 8:32am On Jan 23, 2018
Before the recent natural disasters, I could describe to you how we as a community might recover after a cyberattack on our critical infrastructure, but it would be hard to imagine. Some may argue that it would be too extreme a scenario to consider, and that we would never get to the point where we had to prioritize which lives to save because there is not enough gas in the generator to provide power in an operating room to operate on two patients. However, with the earthquakes and hurricanes of recent months, the scenario is no longer fictional — we are now able to see what it would be like to be without critical infrastructure.

This reality has been played out on our screens — of people collecting water from natural springs, preparing food over an open fire, and communicating with AM radio or (if you’re lucky) a satellite phone. We can see for ourselves how, without operational critical infrastructure, the daily conveniences and everyday necessities as we know them would be gone. Could the loss of critical infrastructure across a nation feasibly occur? Or would it take only an attack on major hubs of water, gas, and electricity for life as we know it to be set back 30 years?

Growing Threats
Critical infrastructure cyberattacks go back as far as 1982. The first notable attack was the “Farewell Dossier” by the CIA against the Soviet Union. While this attack remains unconfirmed, it has been written about. And cyberattacks across the public and private sectors continue to increase. In 2016, multi-vector attacks increased by 322 percent from 2015. How does this impact attacks against critical infrastructure? Since it has become easier to execute attacks against the private and public sector, an attack at the infrastructure level becomes more attractive, especially to nation state actors. Critical infrastructure sectors have historically been known to be slow to patch vulnerabilities and update technology. Because of these characteristics, we can see the progression of critical infrastructure attacks when we look back at the past three years.

In 2014, Stuxnet made the public aware of the reality of critical infrastructure cyberattacks by a nation state. Stuxnet would also be one of the earliest examples of an IoT attack, where the programmable logic controllers connected to the system were infected. The following year, an attack on the western Ukraine electrical grid left 230,000 people without power for six hours. The root of the attack was the firmware that was overwritten across substations. By overtaking the supervisory control and data acquisition (SCADA) system, the attack disabled remote operation of the substations. Other SCADA attacks occurred across Europe shortly after.

In the US on October 21, 2016, the Mirai botnet executed a DDoS attack. Composed of some 45,000 IoT bots, it successfully brought down Dyn, the DNS provider. It impacted mainly east coast DNS service, leaving several internet services we use as part of our everyday life (Twitter, PayPal, and others) inaccessible.

The attack was eventually resolved on the east coast, but similar attacks were later noted in parts of the west coast and Europe. Besides the websites and web services that were affected, Verizon Communications services from broadband to cell phone were also crippled, limiting the means of communication for the east coast of the US. Several groups claimed responsibility, but no one to date has been confirmed as the true attacker. Communications is defined as a “lifeline” component of critical infrastructure by the US Department of Homeland Security.

This year the systems of the National Health Service (NHS) in the United Kingdom were crippled by a WannaCry ransomware attack affecting all systems, including telephones. Surgeries and medical appointments across England and Scotland were cancelled. The staff was forced to use private mobile phones, pen and paper, and accept only emergency patients. This could have been prevented if the systems had been kept up to date with OS patches. Over 300,000 computers globally were infected by the same ransomware virus, which is believed to have spread via email. Public health security is also considered a “lifeline” component of critical infrastructure.

The Role of IoT in Critical Infrastructure

Research continues in order to better understand the IoT devices that may be vulnerable to Brickerbot, a denial-of-service botnet. Brickerbot has already successfully “bricked” 5,000 IoT devices at an unnamed university in the US. The spread of this bot into government organizations could have major, irreversible impacts on all areas of critical infrastructure.

The growing concern with the advancement of critical infrastructure cyberattacks is that they may lead to contamination of the water supply or loss of power across major cities, causing everything from ATMs to traffic lights to go dark. An additional concern is that this may ultimately impact branches of the military and our national security. As demonstrated by the NHS attack, the malicious email campaign started in the private sector and spread into the public sector. Therefore, both the private and the public sectors need to work together to successfully prevent a major attack.

Securing Critical Infrastructure Through Partnership
In the US, the National Infrastructure Advisory Council (NIAC), part of the DHS, advises on counterterrorism. The NIAC provides guidance to the Secretary of Homeland Security on the security of the critical infrastructure sectors. In August this year the NIAC published the report Securing Cyber Assets: Addressing Urgent Cyber Threats to Critical Infrastructure. The council outlined findings from interviewing [url="https://tekslate.com/cyber-security-training/"]cybersecurity[/url] industry experts, acknowledging that the private sector is on the front lines of defense for infrastructure in the US. The overall theme of the report focused on collaboration between the public and private sectors to discuss, investigate, and take action on areas of critical infrastructure that are targets for a cyberattack.

Notable highlights of the NIAC report included:

Establishing separate secure communication networks
Forming a joint task force composed of public and private industry experts from the communications, financial, and electrical power sectors
Creating a shared platform between private and public sector to share cyber threat information
How to Build a Secure IoT Platform
The IoT presents an emerging and significant risk to communications and infrastructure platforms. There are several layers that need to be protected to prevent threat actors from intercepting and misusing data in IoT platforms. In the private and public sector, protecting the web app or the platform that communicates with the devices at the end points is critical. With customers relying on availability, data privacy and service integrity, platform security is indispensable for businesses.

The IoT platform needs to be highly available so its devices can connect and perform their tasks. An unplanned downtime for medical device companies can be very disruptive and dangerous. What happens if your medical device cannot connect to the platform?
The data exchanged between IoT devices and its platform needs to be secure. Data and privacy breaches are getting more frequent as threat actors target personal data through attacks on web and mobile phone cameras and appliances.
The apps need to be secured so no one can manipulate devices to do malicious things. Breaches in extreme situations in connected automobiles can include hackers taking over the control of a car by hacking into the geolocation app.
Security solutions that help companies protect the platform and devices they communicate with can add the first and critical layer of protection. Their primary goal is to shield IoT platforms from any kind of external threat that may impact availability, data integrity or control.

To find out more about how to protect IoT platforms against attacks, read our blog post and see how the following threats can be blocked:

DDoS attacks
Web threats
Data theft
Automation and bots
Protecting critical infrastructure involves policies and security at a granular level. It starts with private- and public-sector collaboration, supported by protecting the controls and the platform that the infrastructure runs on.
Education / SSRS Training Online by Ttacy341(f): 12:32pm On Nov 24, 2017
Tekslate is a global professional IT training provider that emphasizes hands-on experience, with examples from real-time scenarios delivered by experts. Microsoft SSRS Training is an online course beautifully designed to make you an expert in working with the Microsoft SSRS product.

About Course:

SQL Server Reporting Services (SSRS) is a server-based report generation software system from Microsoft. It is part of the suite of Microsoft SQL Server services, which also includes SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services). While SSAS enables users to build special databases for fast analysis of very large amounts of data, and SSIS enables users to integrate data from many different sources, SSRS enables users to easily and quickly develop reports from Microsoft SQL Server databases.

The SSRS service provides an interface into Microsoft Visual Studio so that developers and SQL administrators can connect to SQL databases and use SSRS tools to format SQL reports in multiple ways. SSRS also provides a ‘Report Builder’ tool for less technical IT workers to set up SQL reports of lower complexity. SSRS presents a full range of ready-to-use services and tools to help you manage, create, and deploy reports for your organization. Reporting Services also includes programming features that allow you to customize and extend your reporting functionality.

Key Features:
· Flexible Timings
· Certified & Industry Experts Trainers
· Multiple Training Delivery Models
· Customize Course
· 24/7 Support
· Hands On Experience
· Real Time Use Cases
· Q&A with Trainers
· Small Batches (1to5)
· Flexible Payments
· Job Support
· Placement Assistance.

Contact Details:
INDIA:+91-9052943398 ; USA:972-370-3060 | 973-910-5725
URL; SSRS Training
Email: info@tekslate.com
Website: http://tekslate.com/

Education / Interview Questions & Answer For Build & Release Engineer by Ttacy341(f): 1:07pm On Oct 16, 2017
I get many emails and LinkedIn personal messages asking me to share interview questions for Build & Release Engineer and Configuration Engineer roles.

I have asked some of my friends to share theirs too, and here is the consolidated list of interview questions that are most commonly asked in interviews. If you like it, please share it with other members and contribute to it too. I will add to this article in the next update.

Tag Line:

Interview Questions & Answer for Build Engineer
Interview Questions & Answer for Release Engineer
Interview Questions & Answer for Configuration Engineer
Interview Questions & Answer for Build & Release Engineer

Interview Questions and Answers on Configuration Management

What do you think about configuration management?
What do you understand about Change Management?
Explain branching methodologies and which one you are currently using. Show some examples, with pros and cons.
Explain the concept of merging and why we need it.
Interview Questions and Answers on Build Management

What do you think about build management?
What are the key benefits of build automation, and what are the key inputs to automate the build process in a project?
Discuss the tools and technologies that help to automate the entire build cycle.
What is continuous build integration, and how is it useful for a project?
What are daily builds and nightly builds, and what processes need to be set up to automate and monitor them consistently? (A minimal nightly-build sketch follows this list.)
Explain in detail how to write a build script for any project.
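As a concrete illustration for the nightly-build question above, here is a minimal sketch of a cron-driven nightly build. It assumes a Maven project checked out from Git under /opt/build/myproject; the paths, project, and schedule are all hypothetical:

#!/bin/bash
# nightly-build.sh -- hypothetical nightly build script
set -e
cd /opt/build/myproject
git pull --rebase                   # update the working copy to the latest sources
mvn clean install > build.log 2>&1  # run the full build and unit tests, capturing the log

# crontab entry to trigger it every night at 01:30:
# 30 1 * * * /opt/build/nightly-build.sh

In a real setup this would usually be handled by a CI server, which adds build history, notifications, and build verification tests on top of the same idea.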
Interview Questions and Answers on Release Management

What is release Management?
Talk about Release Management on several platforms?
What do you understand about Packaging and Deployment?
How to Automate Remote Deployment of Builds on Development & Test Servers?
Some Generic Interview questions for Build and Release and SCM Professionals.

What is workflow management? Explain it in detail.
What do you understand about code coverage? Describe the respective tools & utilities.
Describe how to integrate packaging scripts and test automation scripts with the build, and how to monitor build verification test status and the related tools.
How do you coordinate with the development team to increase their productivity?
What do you understand about a multisite project?
How does the SCM team perform integration and coordination between Dev and QA?
How do you troubleshoot your build server? What kinds of issues do you get on a build server or CM server?
Java compiler issues on the build server and their versions
C++ compiler issues on the build server and their versions
What are the basic skills required for Perforce administration, including command-line knowledge?
Explain the best practices for setting up and maintaining the archive of software releases (internal & external) and the license management of third-party libraries.
Concept of labeling, branching, and merging in Perforce / SVN and Git (a small Git sketch follows this list).
Best practices and strategy for branching and merging in Perforce.
Talk about agile and its attempts to minimize risk by developing software in short iterations.
Why choose an agile/iterative development model over the waterfall software development model?
What bug/issue tracking tools are available? Describe them.
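For the labeling/branching/merging question above, here is a minimal Git sketch (the repository, branch, and tag names are made up); the same concepts map onto branches and labels in Perforce and SVN:

git checkout -b feature/login       # branching: create and switch to a feature branch
git add . && git commit -m "Add login feature"
git tag v1.0-rc1                    # labeling: mark a known point in history
git checkout master                 # switch back to the mainline
git merge feature/login             # merging: bring the feature branch into the mainline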
Source code control best practice?

Use a reliable and dedicated server to house your code.
Backup your code daily.
Test your backup and restore processes.
Choose a source control tool that fits your organization's requirements.
Perform all tool specific administrative tasks.
Keep your code repositories as clean as possible.
Secure access to your code.
Describe software build best practices?

Fully automated build process
Build repeatability
Build reproducibility
Build process adherence
Tools Comparison and Differences

Difference Between CVS and SVN
Difference Between Perforce and SVN
Difference Between Perforce and ClearCase
Difference Between VSS and TFSC
Difference Between Perforce and MKS
Difference Between BEA WebLogic and IBM WebSphere

for more interview questions visit: build-and-release-engineer
Education / Peoplesoft HRMS/HCM To Oracle Fusion HCM Cloud Conversion Made Easy by Ttacy341(f): 11:18am On Oct 13, 2017
· DataTerrain launches new reports conversion solution for customers who are migrating PeopleSoft HRMS/HCM to Oracle Fusion HCM cloud. Our well defined framework enables customers with PeopleSoft pre-built Tax/HR/Payroll/Benefits reports to migrate accurately to Oracle Fusion HCM as custom Extracts and Reports.

· Take advantage of our expertise to preserve years of effort in designing and creating the custom layouts/designs and migrating to Oracle Fusion HCM cloud. The DataTerrain technical team provides the experience and expertise to rebuild the custom logic as HCM Extracts.

· Our framework includes an analysis phase, which examines all the elements, identifies the missing elements, and provides a detailed log that can be used for pinpointed analysis to narrow down report conversion issues. Our experts can recommend solutions for the compatibility issues and resolve them.

· We can also create OTBI (Oracle Transactional Business Intelligence) analyses with the required prompts and fields, embedded in a dashboard for the users to view the report. The interactive reports are visually and functionally rich. Users can also create ad hoc reports, dashboards, and alerts to aid daily decision making using OTBI.

· Interested to know further? - For a limited time, Dataterrain offers to analyze a set of your reports, metadata and demo a proof of concept using our framework and automation tool at no cost! Please contact connect@dataterrain.com for a free proof of concept to have direct real-time experience.
Education / Tibco-bw Online Training With Live Project by Ttacy341(f): 4:25am On Oct 10, 2017
Gone are the days when you had to go to pricey and exhaustive coaching classes to prepare for Tibco-bw certification. Today, with everything going digital, everything you need is right there in the comfort of your home.

What if you are still studying or currently working? It will be difficult to find the right resources for preparation. Right? Worry not. We have got what you are looking for, at the best market price, to provide you with the best training by experienced professionals.

Tekslate is a global professional IT training provider that emphasizes hands-on experience, with examples from real-time scenarios delivered by experts. It is one of the largest providers of high-quality online training courses, conceptualized and initiated by several multidisciplinary and ingenious software technocrats with a combined experience of more than 10 years in the industry.

About Course :

TIBCO is a robust and highly scalable platform that can be used to develop new services, automate business processes, and integrate applications – with minimal code required. It has been a market leader in middleware solutions.
TIBCO BW is an Enterprise Application Integration tool for deploying and monitoring processes and managing servers and applications. It supports leading protocols including HTTP/S, SMTP, and JDBC, as well as Java, XML, and Web Services, and offers robust exception handling and error reporting throughout design, testing, and deployment. It provides fault tolerance by enabling the distribution of tasks, and ships packaged adapter software for integrating applications into the infrastructure. TIBCO BW training covers the schema and the implementation of the SDK for creating custom adapters, with a brief explanation of components like TRA, Hawk, and third-party core libraries.

About the training :
Classes are conducted by certified TIBCO BW working professionals with 100% quality assurance: an experienced certified practitioner will teach you the essentials you need to know to kick-start your career on TIBCO BW. Our training makes you more productive with your TIBCO BW online training. Our training style is entirely hands-on. We will provide access to our desktop screen and will be actively conducting hands-on labs with real-time projects.

Key Features:

· Flexible Timings
· Certified & Industry Experts Trainers
· Multiple Training Delivery Models
· Customize Course
· 24/7 Support
· Hands On Experience
· Real Time Use Cases
· Q&A with Trainers
· Small Batches (1to5)
· Flexible Payments
· Job Support
· Placement Assistance.

* Here we provide some of the important concepts covered by our trainers, which will also come up in your interview. Advanced Tibco-bw Interview Questions with answers from our experts will give you a glimpse of what the course is all about.
Education / Microstrategy-architecture by Ttacy341(f): 11:39am On Oct 05, 2017
Microstrategy Intelligence Server

MicroStrategy Architecture

A MicroStrategy system is built around a three-tier or four-tier structure. The diagram below illustrates a four-tier system.

MicroStrategy metadata

MicroStrategy users need connectivity to the metadata so that they can access projects, create objects, and execute reports. MicroStrategy Intelligence Server connects to the metadata by reading the server definition registry when it starts.


What happens when Intelligence Server starts?

When Intelligence Server starts, it does the following:

Initializes internal processing units
Reads from the machine registry which server definition it is supposed to use and connects to the specified metadata database
Loads configuration and schema information for each loaded project
Loads existing report cache files from automatic backup files into memory for each loaded project (up to the specified maximum RAM setting)
Loads schedules
Loads MDX cube schemas
MicroStrategy Service Manager

At TekSlate, we offer resources that help you in learning various IT courses. We provide both written material and demo video tutorials. To gain in-depth knowledge and practical experience, explore the MicroStrategy Training PDF.
Intelligence Server job processing

The following is a high-level overview of the processing that takes place:

A user makes a request from a client application such as MicroStrategy Web, which sends the request to Intelligence Server.
Intelligence Server determines what type of request it is and performs a variety of functions to prepare for processing.
Depending on the request type, a task list is composed that determines what tasks must be accomplished to complete the job, that is, what components the job has to use within the server that handle things like asking the user to respond to a prompt, retrieving information from the metadata repository, executing SQL against a database, and so on. Each type of request has a different set of tasks in the task list.
The components within Intelligence Server perform different tasks in the task list, such as querying the data warehouse, until a final result is achieved.
Those components are the stops the job makes in what is called a “pipeline,” a path that the job takes as Intelligence Server works on it.
The result is sent back to the client application, which presents the result to the user.
Processing report execution

MicroStrategy - Report Execution

Processing object browsing

MicroStrategy - Processing Object Browsing

MicroStrategy User Model

MicroStrategy users

Like most security architectures, the MicroStrategy security model is built around the concept of a user. To do anything useful with MicroStrategy, a user must log in to the system using a login ID and password. The user can then perform tasks such as creating objects or executing reports and documents, and can generally take advantage of all the other features of the MicroStrategy system.
Users are defined in the MicroStrategy metadata, and exist across projects. You do not have to define users for every project you create in a single metadata repository.
Each user has a unique profile folder in each project. This profile folder appears to the user as the “My Personal Objects” folder. By default, other users’ profile folders are hidden. They can be viewed by selecting the Display Hidden Objects check box in the Desktop Preferences dialog box, under the Desktop: Browsing category.
Administrator is a built-in default user created with a new MicroStrategy metadata repository. The Administrator user has all privileges and permissions for all projects and all objects.
MicroStrategy user groups

A user group (or “group” for short) is a collection of users. Groups provide a convenient way to manage a large number of users.

Instead of assigning privileges, such as the ability to create reports, to hundreds of users individually, you may assign privileges to a group. Groups may also be assigned permissions to objects, such as the ability to add reports to a particular folder.

Controlling access to objects: Permissions

Permissions define the degree of control users have over individual objects in the system. For example, in the case of a report, a user may have permission to view the report definition and execute the report, but not to modify the report definition or delete the report.

While privileges are assigned to users (either individually, through groups, or with security roles), permissions are assigned to objects.

Controlling access to functionality: Privileges

Privileges give users access to specific MicroStrategy functionality. For example, the Create Metric privilege allows the user to use the Metric Editor to create a new metric, and the Monitor Caches privilege allows the user to view cache information in the Cache Monitor.

Defining sets of privileges: Security roles

A security role is a collection of project-level privileges that are assigned to users and groups. For example, you might have two types of users with different functionality needs: the Executive Users who need to run, sort, and print reports, and the Business Analysts who need additional capabilities to drill and change subtotal definitions. In this case, you can create two security roles to suit these two different types of users.

Modes of Authentication

The available authentication modes are:

Standard: Intelligence Server is the authentication authority. This is the default authentication mode.
Anonymous: Users log in as “Guest” and do not need to provide a password. This authentication mode may be required to enable other authentication modes, such as database warehouse or LDAP.
Database warehouse: The data warehouse database is the authentication authority.
LDAP (Lightweight Directory Access Protocol): An LDAP server is the authentication authority.
Single sign-on: Single sign-on encompasses several different third-party authentication methods, including:
Windows authentication: Windows is the authentication authority
Integrated authentication: A domain controller using Kerberos authentication is the authentication authority
Tivoli or SiteMinder: A third-party single sign-on tool, such as Tivoli or SiteMinder, is the authentication authority.
Managing and verifying your licenses

MicroStrategy uses two main categories of licenses:

Named User licenses in which the number of users with access to specific functionality is restricted.
In a Named User licensing scheme, the privileges given to users and groups determine what licenses are assigned to users and groups. Intelligence Server monitors the number of users in your MicroStrategy system with each privilege, and compares that to the number of available licenses.

CPU licenses, in which the number and speed of the CPUs used by MicroStrategy server products are restricted.
When you purchase licenses in the CPU format, the system monitors the number of CPUs being used by Intelligence Server in your implementation and compares it to the number of licenses that you have. You cannot assign privileges related to certain licenses if the system detects that more CPUs are being used than are licensed. For example, this could happen if you have MicroStrategy Web installed on two dual-processor machines (four CPUs) and you have a license for only two CPUs.

Caching

A cache is a result set that is stored on a system to improve response time in future requests. With caching, users can retrieve results from Intelligence Server rather than re-executing queries against a database.

Intelligence Server supports the following types of caches:

Result caches: Report and document results that have already been calculated and processed, that are stored on the Intelligence Server machine so they can be retrieved more quickly than re-executing the request against the data warehouse.
Report caches can only be created or used for a project if the Enable report server caching check box is selected in the Project Configuration Editor under the Caching: Result Caches: Creation category.
The History List is a way of saving report results on a per-user basis. The History List is a folder where Intelligence Server places report and document results for future reference. Each user has a unique History List.
With the History List, users can:

Keep shortcuts to previously run reports, like the Favorites list when browsing the Internet.
Perform asynchronous report execution. For example, multiple reports can be run at the same time within one browser, or pending reports can remain displayed even after logging out of a project.
Element caches: Most-recently used lookup table elements that are stored in memory on the Intelligence Server or MicroStrategy Desktop machines so they can be retrieved more quickly.
When a user runs a prompted report containing an attribute element prompt or a hierarchy prompt, an element request is created.
Object caches: Most-recently used metadata objects that are stored in memory on the Intelligence Server and MicroStrategy Desktop machines so they can be retrieved more quickly.
When you or any users browse an object definition (attribute, metric, and so on), you create what is called an object cache. An object cache is a recently used object definition stored in memory on MicroStrategy Desktop and MicroStrategy Intelligence Server.

Scheduling

Scheduling is a feature of MicroStrategy Intelligence Server that you can use to automate various tasks. Time-sensitive, time-consuming, repetitive, and bulk tasks are ideal candidates for scheduling. Running a report or document is the most commonly scheduled task since scheduling reports, in conjunction with other features such as caching and clustering, can improve the overall performance of the system.

Time-Triggered and Event – Triggered

With a time-triggered schedule, you define a specific date and time at which the scheduled task is to be run. For example, you can execute a particular task every Sunday night at midnight. Time-triggered schedules are useful to allow large, resource-intensive tasks to run at off-peak times, such as overnight or over a weekend.

An event-triggered schedule causes tasks to occur when a specific event occurs. For example, an event may trigger when the database is loaded, or when the books are closed at the end of a cycle.

Clustering

A clustered set of machines provides a related set of functionality or services to a common set of users. MicroStrategy recommends clustering Intelligence Servers in environments where access to the data warehouse is mission-critical and system performance is of utmost importance.

A cluster is a group of two or more servers connected to each other in such a way that they behave like a single server. Each machine in the cluster is called a node. Because each machine in the cluster runs the same services as other machines in the cluster, any machine can stand in for any other machine in the cluster.

Failover support
Load balancing
Project distribution and project failover
MicroStrategy - Clustering

How Microstrategy Desktop Works?

MICROSTRATEGY DESKTOP

The MicroStrategy Desktop interface has three panes:

Folder List: Where all the project folders that hold your reports and report-related objects are accessible. The Folder List displays all the project sources, projects, application and schema object folders, and the administrative functions for your business intelligence system. When all panes are displayed, the Folder List is the center pane of the Desktop interface.

If the Folder List does not automatically appear when you log in to MicroStrategy Desktop, from the View menu select Folder List.

Object Viewer: Where the contents of each folder, such as reports or report objects, are displayed as you browse through folders in the Folder List. The right pane of the MicroStrategy Desktop interface is the Object Viewer.

Shortcut Bar: This pane contains icons that allow you instant access to your favorite or most frequently used folders. Simply click on a shortcut icon to jump immediately to the folder to which it is linked. You can create a shortcut to any folder that appears in your Folder List. You can add or remove shortcuts at any time.

Navigating through Desktop

Use the following menus and tools in MicroStrategy Desktop to access the different reporting features of MicroStrategy.

From the Desktop menus, you can do the following.

Microstrategy desktop

From the Desktop toolbar, you can do the following:

MicroStrategy Desktop Toolbar

Report Editor Interface


Report Objects pane: (top left) This pane appears only if you have the MicroStrategy OLAP Services product. Where you can see a summary of all the objects you have included on your report.

There may be more objects in this pane than are displayed on the executed report, because OLAP Services lets analysts quickly remove or add objects from this pane directly to the report template. When the report is executed, the MicroStrategy Engine generates SQL that includes all the objects in this Report Objects pane, not just the objects that are displayed in the report after it is executed.

Object Browser pane: (center left) Where you navigate through the project to locate objects to include on the report.

My Shortcuts pane: (bottom left) Enables you to access any folder in the Object Browser quickly. Creating shortcuts can save you time if you repeatedly browse to the same folders.

View Filter pane: (top right) Where you apply a special kind of filter to any object that is in the Report Objects pane. View filters do not modify the SQL for the report like normal report filters do. Instead, view filters are applied to the overall result set after the SQL is executed and results are returned from the data source. This can help improve report execution performance.

Report Filter pane: (center right) Where you add filtering conditions to a report. Filtering conditions can be made up of attributes, metrics, advanced filter qualifications, and shortcuts to an existing report filter. The Report Filter pane allows you to create a filter without having to open a separate object editor (the Filter Editor). Simple filters can be conveniently created by dragging and dropping objects from the Object Browser into this pane to create a filter.

Report View pane: (bottom right) Where you define your report layouts by dragging and dropping objects from the Object Browser onto this report view pane. You can create a report to serve as a template for other reports;

Page-by pane: (top of Report View pane) Where you place subsets of your report results to be displayed as separate pages of the executed report.

Project objects


Attributes

Attributes are the business concepts reflected in your stored business data in your data source. Attributes provide a context in which to report on and analyze business facts or calculations. Attributes are created by the project designer when an organization’s project is first created.

Metrics

Metrics are MicroStrategy objects that represent business measures and key performance indicators. From a practical perspective, metrics are the calculations performed on data stored in your database, the results of which are displayed on a report.

Specifically, metrics define the analytical calculations to be performed against data that is stored in the data source. A metric is made up of data source facts and the mathematical operations to be performed on those facts, so that meaningful business analysis can be performed on the results.

Metric creation is usually the responsibility of advanced analysts.

Metric Editor:


Check out the top Advanced MicroStrategy Interview Questions now!
You use the Metric Editor to create and save metrics, and to edit existing metrics. The Metric Editor is accessible from MicroStrategy Desktop.
Education / Introduction Sap, Sap Sd, Sap Hana by Ttacy341(f): 7:49am On Oct 03, 2017
Introduction:

SAP: Business users of SAP® ERP use the system to perform daily operations, such as posting financial documents, creating vendors or customers, or displaying reports (as described in the Wikibooks page “SAP ERP/Business user”).

SAP SD

Functional (like MM/PP/SD): SAP SD is related to business processes and configuration. Development (also called technical, ABAP, or NetWeaver programming): related to programming. Basis/NetWeaver/SAP technical admin: related to system installation and support.

SAP HANA:

SAP S/4HANA is short for “SAP Business Suite 4 (for) SAP HANA”, built on a new code line. It brings an enormous wave of SAP innovation to customers, comparable to the move from SAP R/2 to SAP R/3. It is SAP’s next-generation business suite and a new product built entirely on the most advanced in-memory platform available today. In line with present-day design principles, the suite ships with the SAP Fiori (Fig. 1) user experience (UX). SAP S/4HANA delivers massive simplifications (user adoption, data model, user experience, decision making, business processes, and models) and innovations (Internet of Things, Big Data, business networks, and mobile-first). This helps organizations run simple in a digital and networked economy. SAP currently offers on-premise, cloud (public and managed), and hybrid deployments to give customers a genuine choice.
Figure 1. SAP Fiori User interface (UX)
SAP S/4HANA additionally gives customers the option to fully leverage the new HANA multi-tenancy functionality provided by the SAP HANA platform for the cloud.
The SAP HANA platform has been available since 2010, and SAP applications like SAP ERP and the SAP Business Suite have been able to run on the SAP HANA database and/or any other database. However, SAP Business Suite 4 only runs on the SAP HANA database, and thus it is packaged as one product: SAP S/4HANA. The offering is meant to cover all mission-critical processes of an enterprise. It integrates functions from lines of business as well as industry solutions, and also re-integrates portions of SAP Business Suite products such as SAP SRM, CRM, and SCM.
Companies can reduce the complexity of their systems with S/4HANA because the platform integrates people, devices, big data, and business networks in real time. We can confidently say that the SAP HANA platform can potentially save an organization 37% across hardware, software, and labor costs.
Education / Microsoft SQL Server Was 2016's Fastest Growing Database by Ttacy341(f): 5:38pm On Sep 28, 2017
Dive Brief:

Microsoft’s SQL Server was the fastest growing database product in 2016, according to new research from Austrian consulting company Solid IT.

Oracle remains the most popular database overall, and was the fastest growing in 2015.

Microsoft SQL Server has held the number three spot in the rankings for the last several years.

Dive Insight:

The rise of open source database products may soon overthrow both Oracle and Microsoft, however. Solid IT found that, overall, open source databases are growing faster than commercial databases. Even Oracle’s open source database, MySQL, is almost as commonplace as its proprietary solution.

Opening up to open source may have helped propel Microsoft’s popularity this year. In March, Microsoft announced that it would release a version of SQL Server that runs on the open source Linux operating system, and it appears bringing SQL Server to Linux opened up a big new market for the product.

Overall, Microsoft has successfully staged a resurgence over the last few years, and moves like becoming Linux-friendly are helping the company attract more large business customers.
Education / Orient Me, Elasticsearch And Disk Space by Ttacy341(f): 8:39am On Sep 26, 2017
With IBM Connections 6 you can deploy the additional component Orient Me, which provides the first microservices that will build the new IBM Connections Pink. Orient Me is installed on top of IBM Spectrum Conductor for Containers (CFC), a new product that helps with clustering and orchestration of the Docker containers.

Klaus Bild showed in a blog post some weeks ago how to add a container with Kibana to use the deployed Elasticsearch for visualizing the environment.

I found two issues with the deployed Elasticsearch container, but let me explain from the beginning.

On Monday I checked my demo server and the disk was full, so I searched a little bit and found that Elasticsearch is using around 50GB of disk space for the indices. On my server the data path for Elasticsearch is /var/lib/elasticsearch/data. With du -hs /var/lib/* you can check the used space.

You will see something like this, and I would recommend creating a separate mount point for /var/lib (or two, one for /var/lib/docker and one for /var/lib/elasticsearch) for your CFC/Orient Me server:

du -hs /var/lib/*
...
15G /var/lib/docker
0 /var/lib/docker.20170425072316
6,8G /var/lib/elasticsearch
451M /var/lib/etcd
...
So I searched how to show and delete Elasticsearch indices.

On your CFC host run:

curl localhost:9200/_aliases
or

[root@cfc ~]# curl http://localhost:9200/_aliases?pretty=1
{
"logstash-2017.06.01" : {
"aliases" : { }
},
"logstash-2017.05.30" : {
"aliases" : { }
},
"logstash-2017.05.31" : {
"aliases" : { }
},
".kibana" : {
"aliases" : { }
},
"heapster-2017.06.01" : {
"aliases" : {
"heapster-cpu-2017.06.01" : { },
"heapster-filesystem-2017.06.01" : { },
"heapster-general-2017.06.01" : { },
"heapster-memory-2017.06.01" : { },
"heapster-network-2017.06.01" : { }
}
}
}
On my first try, the list was “a little bit” longer. Since this is a test server, I just deleted the indices with:

curl -XDELETE http://localhost:9200/logstash-*
curl -XDELETE http://localhost:9200/heapster-*
For this post, I checked these commands from my local machine, and curl -XDELETE ... with an IP or hostname works too! Elasticsearch provides no real security for index handling, so a best practice is to put an Nginx server in front and only allow GET and POST on the URL. So in a production environment, you should think about securing port 9200 (Nginx, iptables), or anybody could delete the indices. It is only logs and performance data, but I don’t want to allow this.
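As a rough sketch of the iptables option (the subnet is an assumption based on this demo environment, and the CFC containers themselves still need to reach Elasticsearch, so test carefully before using something like this in production):

iptables -A INPUT -p tcp --dport 9200 -i lo -j ACCEPT            # allow the local host
iptables -A INPUT -p tcp --dport 9200 -s 10.10.10.0/24 -j ACCEPT # allow the internal CFC subnet (assumption)
iptables -A INPUT -p tcp --dport 9200 -j DROP                    # drop everything else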

Now the server was running again and I dug a little bit deeper. I found that there is an indices-cleaner container running on the server:

[root@cfc ~]# docker ps | grep clean
6c1a52fe0e0e ibmcom/indices-cleaner:0.1 "cron && tail -f /..." 51 minutes ago Up 51 minutes k8s_indices-cleaner.a3303a57_k8s-elasticsearch-10.10.10.215_kube-system_62f659ecf9bd14948b6b4ddcf96fb5a3_0b3aeb84
So I checked this container:

docker logs 6c1a52fe0e0e
shows nothing. Normally it should show us the curator log. The container command is not selected in the best way.

cron && tail -f /var/log/curator-cron.log
should show the log file of curator (a tool to delete Elasticsearch indices), but with && the tail only starts after cron has exited with a successful status. That’s the reason why docker logs shows nothing.

I started a bash in the container with docker exec -it 6c1a52fe0e0e bash and checked the settings there.

cat /etc/cron.d/curator-cron
59 23 * * * root /bin/bash /clean-indices.sh
# An empty line is required at the end of this file for a valid cron file.
There is a cronjob which runs each day at 23:59. The started script runs:

/usr/local/bin/curator --config /etc/curator.yml /action.yml
Within the /action.yml the config shows that logstash-* should be deleted after 5 days and heapster-* after 1 day.

I checked /var/log/curator-cron.log, but it was empty! So the cronjob never ran. To test if the script works as expected, I just started /clean-indices.sh and the log file shows:

cat /var/log/curator-cron.log
2017-05-31 08:17:01,654 INFO Preparing Action ID: 1, "delete_indices"
2017-05-31 08:17:01,663 INFO Trying Action ID: 1, "delete_indices": Delete logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:01,797 INFO Deleting selected indices: [u'logstash-2017.05.08', u'logstash-2017.05.09', u'logstash-2017.05.03', u'logstash-2017.04.28', u'logstash-2017.04.27', u'logstash-2017.04.26', u'logstash-2017.05.18', u'logstash-2017.05.15', u'logstash-2017.05.12', u'logstash-2017.05.11']
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.08
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.09
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.03
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.04.28
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.04.27
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.04.26
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.18
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.15
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.12
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.11
2017-05-31 08:17:02,130 INFO Action ID: 1, "delete_indices" completed.
2017-05-31 08:17:02,130 INFO Preparing Action ID: 2, "delete_indices"
2017-05-31 08:17:02,133 INFO Trying Action ID: 2, "delete_indices": Delete heapster prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:02,161 INFO Deleting selected indices: [u'heapster-2017.04.26', u'heapster-2017.04.27', u'heapster-2017.04.28', u'heapster-2017.05.03', u'heapster-2017.05.15', u'heapster-2017.05.12', u'heapster-2017.05.11', u'heapster-2017.05.09', u'heapster-2017.05.08']
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.04.26
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.04.27
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.04.28
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.03
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.15
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.12
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.11
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.09
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.08
2017-05-31 08:17:02,366 INFO Action ID: 2, "delete_indices" completed.
2017-05-31 08:17:02,367 INFO Job completed.
I checked the log file daily after this research; since running the task manually, the cron job is working as expected and curator does its job. No full disk since last week.

CFC uses Kubernetes, so stopping the clean-indices container creates a new one immediately! All changes then disappear and the cron job stops working. I didn’t want to wait until IBM provides a container update, so I looked for a way to run curator on a regular basis even with a new container.

I created a script:

#!/bin/bash
# find the ID of the running indices-cleaner container
id=`docker ps | grep indices-cleaner | awk '{print $1}'`
# run the cleanup script inside the container and show the resulting curator log
docker exec -t $id /clean-indices.sh
docker exec -t $id tail /var/log/curator-cron.log
and added it to my crontab on the CFC server.

crontab -e
59 23 * * * script >> /var/log/curator.log
When you use Kibana to analyse the logs, you may want to have more indices available. docker inspect <containerid> shows us:

"Mounts": [
{
"Type": "bind",
"Source": "/etc/cfc/conf/curator-action.yml",
"Destination": "/action.yml",
"Mode": "",
"RW": true,
"Propagation": ""
},

source : https://www.stoeps.de
Education / Loops In Teradata by Ttacy341(f): 2:06pm On Sep 19, 2017
If <Condition> Then <Statement1>;

Else <Statement2>;

End If;



Description

If the condition succeeds, it executes Statement1; otherwise it executes Statement2.



While loop

While (<Condition>) Do

<SQL Statements>

End While;



Description

It repeats until the condition fails.



Looping [For Loop]

Label-Name: Loop

<Statements>

End Loop Label-Name;



Description

The loop is repeated until the condition becomes false.

These core tutorials will help you to learn loops in Teradata. For an in-depth understanding and practical experience, explore the Teradata Training PDF.


Declaring variable

Syntax – Declare <Variable name> <Data type>;

Ex – Declare Dept_name Varchar(30);

Assigning values to variables:



Syntax – Set <Variable name> = <value> or <variable> or <column name>;

Ex

Set Dept_name = D_Name1;
Set Dept_name = 'IT';


Format command

9 – Digit

Z – Zero-suppressed digit

$ – Underscore, etc.

# – Formatting string

X – Single character

B – Blank

Formatting Dates

DD – Day

MMMM – Full month

YY – Year

YYYY – Full year

B – Blank, etc.

85000 – $9999 – $85000

Learn more about Teradata Interview Questions in this blog post.





Note: Formatting commands can be executed only in BTEQ.



for more info about : Teradata
Education / Spark Technology by Ttacy341(f): 2:04pm On Sep 14, 2017
Introduction
Big data and data science are enabled by scalable, distributed processing frameworks that allow organizations to analyze petabytes of data on large commodity clusters. MapReduce (especially the Hadoop open-source implementation) is the first, and perhaps most famous, of these frameworks.

Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs in Scala, Java, Python, and R that allow data workers to efficiently execute machine learning algorithms that require fast iterative access to datasets (see Spark API Documentation for more info). Spark on Apache Hadoop YARN enables deep integration with Hadoop and other YARN enabled workloads in the enterprise.
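To make the Spark-on-YARN point concrete, the following sketch submits the SparkPi example that ships with Spark to a YARN cluster; the jar path and the version in the file name are assumptions, so adjust them to your installation:

spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.3.0.jar 100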

Apache Spark is a general-purpose distributed computing engine for processing and analyzing large amounts of data. Though not as mature as the traditional Hadoop MapReduce framework, Spark offers performance improvements over MapReduce, especially when Spark’s in-memory computing capabilities can be leveraged.

Spark programs operate on Resilient Distributed Datasets, which the official Spark documentation defines as “a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel.”

MLlib is Spark’s machine learning library, which we will employ for this tutorial. MLlib includes several useful algorithms and tools for classification, regression, feature extraction, statistical computing, and more.


Concepts

At the core of Spark is the notion of a Resilient Distributed Dataset (RDD), which is an immutable collection of objects that is partitioned and distributed across multiple physical nodes of a YARN cluster and that can be operated in parallel.

Typically, RDDs are instantiated by loading data from a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat on a YARN cluster.

Once an RDD is instantiated, you can apply a series of operations. All operations fall into one of two types: transformations or actions. Transformation operations, as the name suggests, create new datasets from an existing RDD and build out the processing Directed Acyclic Graph (DAG) that can then be applied on the partitioned dataset across the YARN cluster. An action operation, on the other hand, executes the DAG and returns a value.

for more info : https://tekslate.com/tutorials/spark/

Education / Hadoop Tutorial by Ttacy341(f): 11:56am On Sep 13, 2017
BigData

Big Data is a term that represents data sets whose size is beyond the capacity of commonly used software tools to manage and process the data within a tolerable elapsed time. Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set. It is the term for a collection of data sets, so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.

Big data is a term that is defined by three characteristics:

Volume
Velocity
Variety
We already have RDBMSs to store and process structured data. But of late we have been getting data in the form of videos, images, and text. This data is called unstructured and semi-structured data, and it is difficult to store and process it efficiently using an RDBMS. So we have to find an alternative way to store and process this type of unstructured and semi-structured data.

Hadoop is one of the technologies used to efficiently store and process large sets of data. Hadoop is entirely different from a traditional distributed file system and can overcome all the problems that exist in traditional distributed systems. Hadoop is an open source framework written in Java for storing data in a distributed file system and processing the data in a parallel manner across a cluster of commodity nodes.

The Motivation for Hadoop


What problems exist with ‘traditional’ large-scale computing systems?

What requirements should an alternative approach have?

How does Hadoop address those requirements?

Problems with Traditional Large-Scale Systems

Traditionally, computation has been processor-bound
Relatively small amounts of data
For decades, the primary push was to increase the computing power of a single machine
Distributed systems evolved to allow developers to use multiple machines for a single job
Distributed Systems: Data Storage

Typically, data for a distributed system is stored on a SAN
At compute time, data is copied to the compute nodes
Fine for relatively limited amounts of data
Distributed Systems: Problems

Programming for traditional distributed systems is complex
Data exchange requires synchronization
Finite bandwidth is available
Temporal dependencies are complicated
It is difficult to deal with partial failures of the system
The Data-Driven World

Modern systems have to deal with far more data than was the case in the past
Organizations are generating huge amounts of data
That data has inherent value, and cannot be discarded
Examples: Facebook - over 70 PB of data, eBay - over 5 PB of data, etc.

Many organizations are generating data at a rate of terabytes per day. Getting the data to the processors becomes the bottleneck.

Requirements for a New Approach

Partial Failure Support

The system must support partial failure
Failure of a component should result in a graceful degradation of application performance, not complete failure of the entire system.
Data Recoverability

If a component of the system fails, its workload should be assumed by still-functioning units in the system
Failure should not result in the loss of any data
Component Recovery

If a component of the system fails and then recovers, it should be able to rejoin the system without requiring a full restart of the entire system.
Consistency

Component failures during execution of a job should not affect the outcome of the job

Scalability

Adding load to the system should result in a graceful decline in the performance of individual jobs, not failure of the system.

Increasing resources should support a proportional increase in load capacity.

Hadoop’s History
Hadoop is based on work done by Google in the late 1990s and early 2000s.
Specifically, on papers describing the Google File System (GFS), published in 2003, and MapReduce, published in 2004.
This work took a radically new approach to the problems of distributed computing, meeting the requirements of reliability and availability.
The core concept is to distribute the data as it is initially stored in the system.
Individual nodes can work on data local to those nodes, so data need not be transmitted over the network.
Developers need not worry about network programming, temporal dependencies, or low-level infrastructure.
Nodes talk to each other as little as possible; developers should not write code that communicates between nodes.
Data is spread among the machines in advance so that computation happens where the data is stored, wherever possible.
Data is replicated multiple times on the system to increase availability and reliability.
When data is loaded into the system, the input file is split into blocks, typically 64 MB or 128 MB.
Map tasks generally work on relatively small portions of data, typically a single block.
A master program allocates work to nodes such that a map task will work on a block of data stored locally on that node whenever possible.
Nodes work in parallel, each on its own part of the dataset.
If a node fails, the master will detect that failure and re-assign the work to another node in the system.
Restarting a task does not require communication with nodes working on other portions of the data.
If a failed node restarts, it is automatically added back to the system and assigned new tasks.
Hadoop Overview
Hadoop consists of two core components

HDFS
MapReduce
There are many other projects built around the core concepts of Hadoop. Together, these projects are called the Hadoop Ecosystem.

The Hadoop Ecosystem includes:

Pig
Hive
Flume
Sqoop
Oozie
and so on…

A set of machines running HDFS and MapReduce is known as a Hadoop cluster, and the individual machines are known as nodes. A cluster can have as few as one node or as many as several thousand nodes; as the number of nodes increases, performance increases. Hadoop Streaming is the facility that allows map and reduce tasks to be written in languages other than Java (C++, Ruby, Python, Perl, and so on); a minimal Python word-count sketch follows below.
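
As a hedged illustration of Hadoop Streaming, the two small Python scripts below implement the classic word count. The file names mapper.py and reducer.py are assumptions; in practice the scripts are supplied to the streaming jar through its -mapper and -reducer options.

# mapper.py - word-count mapper for Hadoop Streaming.
# Reads raw text lines from stdin and emits "word<TAB>1" pairs.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))

# reducer.py - word-count reducer for Hadoop Streaming.
# Input arrives sorted by key, so counts can be accumulated per word
# and emitted whenever the word changes.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)

if current_word is not None:
    print("%s\t%d" % (current_word, current_count))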

Hadoop Software and Hardware Requirements

Hadoop usually runs on open-source operating systems (Linux distributions such as Ubuntu); CentOS/RHEL is mostly used in production.

On Windows, virtualization software such as VMware Player, VMware Workstation, or VirtualBox is required to run a Linux guest OS.

Java is a prerequisite for Hadoop installation.

For more info: https://tekslate.com/hadoop-training/

Programming / IBM Launches E-commerce Practice Focused On Retail Operation Integration by Ttacy341(f): 6:44pm On Sep 10, 2017
Hoping to grab some of the US$70 billion yearly worldwide market of electronic commerce software and services, IBM has launched an e-commerce practice focused on retail operation integration and analytics.
"Through technology, consumers have gotten much more empowered than they have ever been before," for the retail industry, said Craig Hayman, the general manager of IBM Unica Campaign Software Industry Solutions, who heads the new practice.
As a result, retail companies need more information about how well their products and services are faring in this fiercely competitive marketplace, he argued.
IBM's "Smarter Commerce" initiative ties together a number of customized IBM software products and associated services that should help retailers better engage with customers and potential customers.
For this initiative, the company has dedicated 1,200 personnel for e-commerce consulting services. It has also assembled a training program for clients and business partners.
On the software side, IBM is integrating software from a number of e-commerce tool vendors it has acquired over the past few years. In 2010 alone, IBM spent over $2 billion acquiring e-commerce software vendors. It paid $1.4 billion for Sterling Commerce, which provides software for integrating back-end retail systems. Also last year, IBM acquired Coremetrics, which offered a set of cloud services that analyze how well marketing campaigns worked.
Finally, IBM acquired Unica, which provided marketing campaign automation software. IBM also plans to use its own WebSphere Commerce platform for its e-commerce packages.
