
Developing a Big Data Culture is Crucial for Banks to be Customer Centric

It is important for banks to be customer-centric, because a customer who is better engaged is easier to retain and do business with. To provide services that customers value, banks need to exploit big data. When big data is combined with analytics, it can open up big opportunities for banks. The volume of banking customers is on the rise, and so is their data. It is time for the banking sector to look beyond traditional approaches and adopt new technologies powered by big data, like natural language processing and text mining, which convert large amounts of unstructured information into meaningful data that can yield valuable insights.

Switching to big data enables banks to get a 360-degree view of their customers and keep providing excellent services. Many banks, like Bank of America and U.S. Bank, have implemented big data analytics and are reaping its benefits. Rabobank, which has adopted big data analytics to detect criminal activity at ATMs, is ranked among the top 10 safest banks in the world.

Big Data’s Advantages for the Banking Industry:

  • Streamline Work Process and Service Delivery:

Banks need to filter through enormous numbers of data sets in order to surface relevant information when a customer enters account details into the system. Big data can speed up this process. It enables financial institutions to spot and correct problems before they affect clients. Big data also helps cut costs, which in turn leads to higher revenues for banks.

In the case of clients who tend to go back on their decisions, big data can help alter the service-delivery process in such a manner that these clients are bound to stick to their commitments. It also allows banks to track credit and loan limits, so that customers don't exceed them.

Cloud-based analytics packages sync with big data systems to provide real-time evaluation. Banks can sift through tons of client information to track transactional behaviour in real time and provide relevant resources to clients. Real-time client contact is very useful in verifying suspicious transactions.

  • Customer Segmentation:

Big data helps banks understand customer spending habits and determine their needs. For example, when we use our credit cards to purchase something, banks acquire information about what we buy and how much we spend, and use this information to make relevant offers to us. Through big data, banks are able to trace all customer transactions and answer questions about a customer, such as which services he/she commonly accesses, what his/her preferred credit card expenditures are and what his/her net worth is. The advantage of customer segmentation is that it enables banks to design marketing campaigns that cater to the specific needs of a customer. It can be used to deliver personalized schemes and plans. Analyzing the past and present expenses of a client helps banks create meaningful client relationships and improve response rates.
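As a toy illustration of segmentation in practice (not any bank's actual pipeline), the sketch below clusters a handful of invented customer profiles by spending behaviour using k-means:

```python
# A minimal segmentation sketch: k-means on made-up spending features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features: monthly card spend, transaction count, net worth (in lakhs)
customers = np.array([
    [12000,  45,  8.0],
    [90000, 120, 55.0],
    [15000,  30, 10.0],
    [85000, 100, 60.0],
    [ 5000,  10,  3.5],
])

# Scale the features so raw spend does not dominate the distance metric
X = StandardScaler().fit_transform(customers)

# Group the customers into, say, two segments for targeted campaigns
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)  # cluster labels, e.g. mass-market vs. high-net-worth
```

Each label can then drive a different campaign or personalized plan, which is exactly the use the paragraph above describes.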


  • Fraud detection:

According to Avivah Litan, a financial fraud expert at Gartner, big data supports behavioral authentication, which can help prevent fraud. Litan says, "using big data to track such factors as how often a user typically accesses an account from a mobile device or PC, how quickly the user types in a username and password, and the geographic location from which the user most often accesses an account can substantially improve fraud detection."

Utah-based Zions Bank is largely dependent on big data to detect fraud. Big data can detect a complex problem like cross-channel fraud by aggregating fraud alerts from multiple disparate data sources and deriving meaningful insights from them. 
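A minimal sketch of the behavioural idea Litan describes might look like the following; the features, thresholds and data here are all invented for illustration:

```python
# Toy behavioural-authentication score: flag logins that deviate from a
# user's typical access frequency, typing speed and location.
import numpy as np

# Hypothetical history for one user: [sessions per day, seconds to type credentials]
history = np.array([[3.0, 4.1], [2.5, 3.9], [3.2, 4.3], [2.8, 4.0]])
usual_city = "Delhi"

mean, std = history.mean(axis=0), history.std(axis=0)

def risk_score(sessions, typing_secs, city):
    """Crude anomaly score: z-scores on behaviour plus a location penalty."""
    z = np.abs((np.array([sessions, typing_secs]) - mean) / std)
    location_penalty = 0.0 if city == usual_city else 2.0
    return z.sum() + location_penalty

# A login from an unfamiliar city, typed suspiciously fast -> high score, review it
print(risk_score(sessions=3.0, typing_secs=1.2, city="Unknown"))
```

A real system would of course aggregate many more signals across channels, which is how cross-channel fraud detection of the kind described above works.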

  • Risk Management:

Financial markets are becoming more and more interconnected, which increases their risk. Big data plays a pivotal role in risk management in the financial sector, as it provides more extensive risk coverage and faster responses. It helps create robust risk-prediction models that evaluate credit repayment risks or estimate the probability that a customer will default on a loan. It also aids in identifying risks associated with emergent financial technologies.
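One standard building block of such credit-risk models is the expected-loss decomposition EL = PD × LGD × EAD; here is a one-line worked example with invented inputs:

```python
# The standard expected-loss decomposition: EL = PD x LGD x EAD.
# All inputs below are invented for illustration.
prob_default = 0.02              # PD: probability of default over one year
loss_given_default = 0.45        # LGD: share of the exposure lost on default
exposure_at_default = 1_000_000  # EAD: outstanding exposure at default

expected_loss = prob_default * loss_given_default * exposure_at_default
print(expected_loss)  # 9000.0 -- the amount to provision against this loan
```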

Hence, banks need to adopt a big data culture to improve customer satisfaction, keep up with global trends and generate higher revenues.

For credit risk management courses online, visit DexLab Analytics. It is a leading institute offering credit risk analytics training in Delhi.

 


Credit Risk Modelling: A Basic Overview

HISTORICAL BACKGROUND

The root cause of the financial crisis that stormed the globe in 2008 was the sub-prime crisis that surfaced in the USA in late 2006. Sub-prime lending practices took hold in the USA during 2003-2006. From late 2003 the housing sector expanded and housing prices rose; indeed, prices grew faster than exponentially, following a super-exponential or hyperbolic growth path. Such super-exponential paths for asset prices are termed 'bubbles', so the USA was riding a housing-price bubble. Bankers then started lending to the sub-prime segment, which comprised customers who hardly had the capacity to pay back their loans. However, since the loans were backed by mortgages, bankers believed that as housing prices rose they could not only recover the loans but also earn profits by selling off the houses.

The bankers' expectation that asset prices would always ride the rising curve was erroneous, and when housing prices crashed, the loans became unrecoverable. Many banks had sold these loans to investment banks, who converted them into asset-backed securities. These asset-backed securities were distributed all over the globe by the investment banks, the largest issuer being Lehman Brothers. When the underlying assets became worthless and investors lost their investments, many of the investment banks collapsed. This caused the financial crisis and a huge loss of investors' and taxpayers' wealth. The involvement of Systemically Important Financial Institutions (SIFIs) and Globally Systemically Important Financial Institutions (G-SIFIs) in this reckless lending amplified the intensity and the exposure of the crisis.


SYSTEMICALLY IMPORTANT FINANCIAL INSTITUTIONS AND THEIR ROLE IN SYSTEMIC STABILITY

A Systemically Important Financial Institution (SIFI) is a bank, insurance company, or other financial institution whose failure might trigger a financial crisis.

If a SIFI has the capacity to trigger a recession across the globe, it is known as a Globally Systemically Important Financial Institution (G-SIFI). The Basel Committee follows an indicator-based approach for assessing the systemic importance of G-SIFIs. The basic tenets of this approach are:

  1. The Basel Committee is of the view that global systemic importance should be measured in terms of the impact that a bank's failure can have on the global financial system and the wider economy, rather than the risk that such a failure will occur. The concept is thus a global, system-wide loss-given-default (LGD) concept rather than a probability-of-default (PD) one.
  2. The indicators reflect the following metrics: the size of banks, their interconnectedness, the lack of readily available substitutes or financial institution infrastructure for the services they provide, their global activity and their complexity. Each of these is defined as follows:

(i) Cross-Jurisdictional Activity: This indicator captures the global footprint of a bank. It is divided into two components: cross-jurisdictional claims and cross-jurisdictional liabilities. These measure a bank's activity outside its home jurisdiction relative to the overall activity of the other banks in the sample. The greater the global reach of a bank, the more difficult it is to coordinate its resolution and the more widespread the spillover effects of its failure.

(ii) Size: The size of a bank is measured using its total global exposure, the same exposure measure used to calculate the leverage ratio (Basel III, paragraph 157, defines the exposure measure for this purpose). A bank's score on this criterion is its amount of total exposure divided by the sum of the total exposures of all banks in the sample; a short sketch of this calculation appears after this list.

(iii) Interconnectedness: Financial distress at one institution can materially raise the likelihood of distress at other institutions, given the web of contractual obligations in which these firms operate. Interconnectedness is defined in terms of the following parameters: (a) intra-financial system assets, (b) intra-financial system liabilities and (c) the degree to which a bank funds itself from other parts of the financial system.

(iv) Complexity: The systemic impact of a bank's distress or failure is expected to be positively related to its overall complexity, which spans business, structural and operational complexity. The more complex a bank is, the greater the cost and time needed to resolve it.
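To make the size score in (ii) concrete, here is a minimal sketch of the calculation, assuming a hypothetical four-bank sample with invented exposure figures:

```python
# Size score as described above: each bank's total exposure divided by the
# sum of total exposures across the sample. Figures (in billions) are invented.
exposures = {"Bank A": 2100, "Bank B": 1500, "Bank C": 900, "Bank D": 500}

sample_total = sum(exposures.values())  # 5000
size_scores = {bank: exp / sample_total for bank, exp in exposures.items()}
print(size_scores)  # e.g. Bank A scores 2100 / 5000 = 0.42
```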

Given these characteristics, it was important to apply restrictions to keep the lending practices of such banks under control; reckless lending by SIFIs had resulted in the financial crisis of 2008-09. After the crisis, regulators became more vigilant about banks maintaining reserves adequate to survive macroeconomic stress scenarios. The three major sources of risk to which banks are exposed are: 1. Credit Risk 2. Market Risk 3. Operational Risk. Several regulations have been imposed on banks to ensure that they are adequately capitalised. The major regulatory requirements with which banks need to be compliant are:

  1. The Basel Accords 2. Dodd-Frank Act Stress Testing (DFAST) 3. Comprehensive Capital Analysis and Review (CCAR).

Before looking into the regulatory frameworks and their impact on credit risk modelling, let us build an understanding of the structure of bank capital.


CAPITAL STRUCTURE OF BANKS

A bank's capital structure comprises two main components: 1. Equity capital 2. Supplementary capital. Equity capital is the purest form of banking capital: it is the true, paid-in capital that a bank has raised from its shareholders. Supplementary capital comprises estimated items such as allowances and provisions. This portion of capital can easily be manipulated by management, either to meet undue shareholder expectations or to over-reserve capital unnecessarily; hence there are strong capital norms and regulations around supplementary capital. The two tiers of capital are Tier 1 and Tier 2 capital, with Tier 1 further decomposed into Tier 1 common capital and total Tier 1 capital.

 

Tier 1 common capital = Common shareholders' equity − Goodwill − Intangibles. Goodwill and intangibles are not physical capital: in scenarios where goodwill and intangible assets are written down under stress, the bank's capital would deteriorate, so they cannot be counted in Tier 1 common capital. Only the core, tangible capital actually present with the bank qualifies.

Tier 1 capital = Total shareholders' equity (common + preferred stock) − Goodwill − Intangibles + Hybrid securities.

Tier 1 is the core equity capital of the bank. The components of Tier 1 capital are common across all geographies in the banking system. Equity capital includes issued and fully paid equity; this is the purest form of capital the bank has.

Tier 2 capital: Tier 2 capital comprises estimated reserves and provisions. This is the part of capital used to cushion against expected losses. It has the following composition:

Tier 2 capital = Subordinated debt + Allowances for loan and lease losses + Provisions for bad debts

This portion of capital is reserved out of profits; hence, managers may be tempted to under-report these parameters to meet shareholders' expectations. However, under-reserving raises the chances of bankruptcy or regulatory penalties.

Total capital of a bank = Tier 1 capital + Tier 2 capital
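As a quick worked illustration of the definitions above (all balance-sheet figures are invented for the example, stated in crores):

```python
# Applying the Tier 1 / Tier 2 definitions above to an invented balance sheet.
common_equity     = 800
preferred_stock   = 100
goodwill          = 50
intangibles       = 30
hybrid_securities = 40

subordinated_debt   = 120
loan_loss_allowance = 60
bad_debt_provisions = 20

tier1_common = common_equity - goodwill - intangibles                   # 720
tier1 = (common_equity + preferred_stock) - goodwill - intangibles \
        + hybrid_securities                                             # 860
tier2 = subordinated_debt + loan_loss_allowance + bad_debt_provisions   # 200
total_capital = tier1 + tier2                                           # 1060
print(tier1_common, tier1, tier2, total_capital)
```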


CALCULATION OF CAPITAL RATIOS

Every bank faces three main types of risk: 1. Credit risk 2. Market risk 3. Operational risk. Credit risk arises from lending funds to borrowers who may default on their loans. Market risk is the risk a bank faces from market fluctuations such as stock-price changes, interest-rate movements and price-level fluctuations. Operational risk arises from failures of operational processes. Exposure to these risks differs from bank to bank, so the capital that banks need to set aside differs with their risk exposure. Regulators have therefore defined a metric called Risk-Weighted Assets (RWA) to quantify the exposure of a bank's assets to risk, and every bank must set aside capital relative to its RWA. A big advantage of RWAs is that they include not only on-balance-sheet items but off-balance-sheet items as well. Banks need to maintain their Tier 1 common capital, Tier 1 capital and Tier 2 capital relative to their RWAs; thus arise the capital ratios.

 

Total RWA = RWA for credit risk + RWA for market risk + RWA for operational risk

Tier 1 common capital ratio = Tier 1 common capital / RWA (CR + MR + OR)

Tier 1 capital ratio = Tier 1 capital / RWA (CR + MR + OR)

Total capital ratio = Total capital / RWA (CR + MR + OR)

Leverage ratio = Tier 1 capital / Firm's consolidated assets

Regulators prescribe critical cut-offs for each of these ratios:

Tier 1 common capital ratio >= 2% at all times

Tier 1 capital ratio >= 4% at all times

Tier 2 capital cannot exceed Tier 1 capital

Leverage ratio >= 3% at all times
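Continuing the invented balance sheet from the earlier sketch, a minimal illustration of the ratio calculations and cut-off checks might look like this (the RWA and consolidated-asset figures are again made up):

```python
# Capital figures carried over from the invented example above (in crores).
tier1_common, tier1, tier2 = 720, 860, 200
total_capital = tier1 + tier2                           # 1060

rwa_credit, rwa_market, rwa_operational = 6000, 1500, 500
total_rwa = rwa_credit + rwa_market + rwa_operational   # 8000
consolidated_assets = 20000

tier1_common_ratio  = tier1_common / total_rwa          # 0.09   -> 9%
tier1_ratio         = tier1 / total_rwa                 # 0.1075 -> 10.75%
total_capital_ratio = total_capital / total_rwa         # 0.1325 -> 13.25%
leverage_ratio      = tier1 / consolidated_assets       # 0.043  -> 4.3%

# Check the regulatory cut-offs listed above
assert tier1_common_ratio >= 0.02   # Tier 1 common ratio >= 2%
assert tier1_ratio >= 0.04          # Tier 1 ratio >= 4%
assert tier2 <= tier1               # Tier 2 cannot exceed Tier 1
assert leverage_ratio >= 0.03       # leverage ratio >= 3%
print("All regulatory cut-offs satisfied")
```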

 

In the next blog, we will explore how credit risk models help ensure the capital adequacy of banks and support business risk management.

 

Looking for a credit risk analysis course online? Drop by DexLab Analytics – it offers excellent credit risk analysis courses at affordable rates.

 


Banks Merged With Fintech Startups to Perform Better Digitally

Axis Bank has acquired FreeCharge, a mobile wallet company, opening doors to many such deals in the future. In light of this, do you think banks and fintech startups have started working towards a common goal?

 

One day in early 2016, Rajiv Anand, the Executive Director of Retail Banking at Axis Bank, asked his team, hard at work: "Do present-day customers know how a bank will look in the future?"


The Opportunities and Challenges in Credit Scoring with Big Data

Over the past few decades, banking institutions have collected plenty of data describing the default behaviour of their clientele: historical data such as a person's date of birth, income, gender, employment status and so on. All of this data has been neatly stored in huge databases or data warehouses (e.g. relational ones).

On top of this, banks have accumulated considerable business experience with their credit products. For instance, many credit experts have done a pretty good job of discriminating between low-risk and high-risk mortgages using their business expertise alone. The goal of credit scoring is to analyse both sources of information in detail and combine them into a statistically based decision model that scores future credit applications and ultimately decides which ones to accept and which to reject.
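As a minimal sketch of such a statistically based decision model, here is a toy logistic regression scorecard on made-up application data (not a production scorecard; real models use far more data and careful variable selection):

```python
# Toy credit scorecard: logistic regression on invented applications (1 = defaulted).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [age, annual income in lakhs, years employed]
X = np.array([
    [25,  3.0,  1], [40, 12.0, 15], [33,  6.5,  5],
    [52, 20.0, 25], [28,  4.0,  2], [45, 15.0, 18],
])
y = np.array([1, 0, 1, 0, 1, 0])  # observed default behaviour

model = LogisticRegression().fit(X, y)

# Score a new application: the estimated probability of default drives accept/reject
applicant = np.array([[30, 5.0, 3]])
pd_estimate = model.predict_proba(applicant)[0, 1]
print("Reject" if pd_estimate > 0.5 else "Accept", round(pd_estimate, 2))
```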

The emergence of Big Data has created both opportunities and challenges for credit scoring. Big Data is often characterised in terms of its four Vs: Variety, Velocity, Volume and Veracity. To illustrate this, let us briefly look at some key sources and processes that generate Big Data.

The traditional sources of Big Data are usually large-scale transactional enterprise systems such as OLTP (Online Transaction Processing), ERP (Enterprise Resource Planning) and CRM (Customer Relationship Management) applications. Classical credit scoring models are generally constructed using the data extracted from these traditional transactional systems.

Online social graphs are a more recent example. Simply think about all the major social media networks, like Weibo, WeChat, Facebook and Twitter. Together these networks capture information about close to two billion people relating to their friends, preferences and other behaviours, leaving behind a huge trail of digital footprints in the form of data.

Also think about the IoT (Internet of Things), the emerging sensor-enabled ecosystem that will link various objects (e.g. cars, homes) with each other and with humans. Finally, we see more and more open or public data, such as data about weather, maps, traffic and the macro-economy. Clearly, all of these new data-generating sources offer tremendous potential for building better credit scoring models.

The main challenges:

The data-generating processes mentioned above are all characterised by the sheer volume of data they create. This poses a serious challenge, since setting up a scalable storage architecture, combined with a distributed approach to data manipulation and querying, is difficult.

Big Data also comes in a wide variety of formats. Traditional, structured data, such as a customer's name and date of birth, is increasingly complemented by unstructured data such as images, tweets, emails, sensor data, Facebook pages and GPS data. While the former is easily stored in traditional databases, the latter needs to be accommodated using appropriate database technology that facilitates the storage, querying and manipulation of each of these types of unstructured data. This requires substantial effort, since at least 80 percent of all data is thought to be unstructured.

Velocity is the speed at which data is generated, and the speed at which it must be stored and analysed. Streaming applications such as on-line trading platforms, SMS messages, YouTube, credit card swipes and phone calls are all examples of high-velocity data, and they form an important concern.

Veracity, the quality or trustworthiness of the data, is yet another factor that needs to be considered. Sadly, more data does not automatically mean better data, so the quality of the data being generated must be closely monitored and guaranteed.

So, in closing: as the volume, variety, velocity and veracity of data keep growing, so will the opportunities to build better credit scoring models.

Looking for credit risk modelling courses? Take up our credit risk management course, online or classroom-based, from DexLab Analytics and get your career moving.

 

 
