Improving the Efficiency of Medical Malpractice Attorneys in Collecting, Processing, and Analyzing Data Via a Software-Assisted Solution
Technology enables more complicated problems to be solved faster and at lower cost by reducing the labor involved. Technology is also improving decision-making through the gathering and analysis of enormous volumes of data. Technology is applied everywhere to address difficult problems, yet the medical malpractice sector lags far behind because adopting it implies risk. Big Data is the most recent development in information technology, and Artificial Intelligence can imitate human intelligence processes in machines; together they deliver speed, precision, and efficiency. This paper presents a solution based on artificial intelligence and big data analytics tools to help medical malpractice attorneys greatly reduce the time spent collecting, processing, and analyzing data during the manual legal research process. This software solution can address the underlying problem of a legal research process that takes medical malpractice attorneys too long. The paper shows how historical court cases, legal texts, rules and regulations, and social media data can be gathered, examined, and analyzed via the prospective algorithms.
Introduction
Technology makes it possible to address more complicated problems quickly and more affordably by improving efficiency. People benefit from a better quality of life and access to a wider variety of services and commodities. Technology is used everywhere to solve complex problems, yet the medical malpractice industry lags far behind in implementing new technologies because doing so involves risk. Medical malpractice claims are difficult legal actions that can arise for a number of reasons. To initiate a suit in civil court for medical negligence, precise stages must be followed, and they differ from state to state [1]. The intensity and complexity of the case, which is a reasonably frequent occurrence, should determine how medical malpractice rules are changed [2]. Medical malpractice trials have so far been unable to establish simple responsibility or reasonable benefit pricing, and the lawsuits cost companies and individuals millions of dollars. Although the laws often reflect the same legal environment, medical malpractice proceedings are complicated and not standardized, and the intricacies of the legal process change from state to state and jurisdiction to jurisdiction. The problem is that the legal research process currently takes medical malpractice attorneys a long time because they rely heavily on manually collecting, processing, and analyzing data.
Big Data analytics technologies have brought speed, accuracy, and efficiency to processing very large volumes of data, and Artificial Intelligence technology can simulate human intelligence processes in machines; these are the newest inventions in information technology. These new technologies can also be used to resolve many challenges faced by medical malpractice attorneys, enabling them to collect, process, and analyze the data needed to handle their clients’ legal claims more cost-effectively. There is considerable hope that the use of artificial intelligence will significantly advance every aspect of healthcare, from diagnosis to therapy, and most people agree that AI tools can support and improve human work [3].
Many medical malpractice lawsuits cost millions of dollars, largely because rights may be established for persons, businesses, and many other entities by state laws, federal laws, or private contracts, and in some situations by more than one of these legal regimes at once. Because they may not have encountered them before, most individuals are unaware of these regulations and laws. When they face such circumstances, which may have a simple answer, they worry and turn to expensive attorneys instead. Using legal texts and publicly available case histories, this research seeks to understand a legal issue and identify potential software-assisted algorithms for medical malpractice cases.
This study’s main goal is to develop a solution based on artificial intelligence and big data analytics tools that significantly reduces the time spent gathering, processing, and analyzing data during the legal research process, thereby improving efficiency [4].
Related Works
Technology for Legal Prediction
In the legal research process, medical malpractice attorneys still depend heavily on manual data collection, processing, and analysis, which adds time to the process [5]. In fields as diverse as finance, forest management, national security, and medicine, we are increasingly turning to artificially intelligent computers to make our decision-making more precise, efficient, and, given the appropriate circumstances, more equitable [6]. As costs for many services and commodities decreased, the possibilities and quality of life for the average individual improved; until now, however, the legal profession has resisted straightforward accountability and has not provided its benefits at a fair price. Over the next few years, artificial intelligence will undoubtedly change how lawyers conduct their profession. Artificial intelligence (AI) has the capacity to analyze legal material based on semantics and create legal predictions from legal data sets, helping to automate the judicial system and consequently boosting efficiency within a reasonable budget [5].
To address the long time medical malpractice attorneys spend on manual research, the suggested solution was to develop a tool based on artificial intelligence and big data analytics [7]. Although the standards typically reflect the same legal background, the legal process varies from state to state and jurisdiction to jurisdiction. Even in cases that end in settlement, the overall expenses of litigation are significantly more than the sum of the attorneys’ fees [8].
Technology is also helping to improve decision-making by collecting and analyzing massive amounts of data [9], [10]. Researchers from Princeton University, the University of Pennsylvania, and New York University concluded in a recent study that “legal services” was the sector most exposed to emerging Artificial Intelligence. To help ease those fears, law firms frequently use software tailored for legal work and built on top of systems such as ChatGPT; legal technology start-ups like Harvey and Casetext have created such customized software [11].
The Role of Big Data in Medical Malpractice
Big data is a common phrase describing the availability and exponential growth of structured, semi-structured, and unstructured data; in short, it is data too large to fit into a traditional data system, and it has value for businesses. Big data analytics applies computational methods to gain value from that data, and individuals and companies are developing many tools and technologies to process, store, manage, and analyze it. Big data analytics uses machine learning, data mining, and statistics, with advanced analytic techniques and tools, on data obtained from different sources and of different sizes. The community will gain as many businesses use big data analytics for operations, maintenance, and management [7]. Big data is a collection of structured, semi-structured, and unstructured data that businesses have gathered, which can be mined for information and used in sophisticated analytics applications like predictive modeling and machine learning [12]. In large data systems, several data types may need to be stored and managed alongside one another. Using technologies that offer big data analytics features and capabilities, multiple data science and advanced analytics disciplines can be deployed to run different applications once the data has been obtained and prepared for analysis [13].
Data Mining and Predictive Analytics
Data mining is the practice of finding patterns and correlations within large data sets for prediction. The process reveals hidden linkages and offers suggestions for organizational development. Data mining extracts accurate, previously undiscovered, intelligible, and useful information from sizable databases and uses that knowledge to inform essential business choices. It involves reviewing large amounts of electronic data from many angles and distilling it into valuable knowledge. The current nature of big data is to analyze large amounts of data for decision-making, and data mining is used for discovering patterns in large data sets in market analysis, corporate analysis, fraud detection, science exploration, sports, astrology, and so on.
Predictive analytics uses data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. Researchers have used surveys, mobile data, and electronic health records to develop their predictive models [14]. Artificial Intelligence and big data work together, analyzing data with many methods and techniques to help organizations make decisions [15]. Machine learning offers a wide range of tools, techniques, and frameworks to reduce healthcare costs and to address the challenges of the movement toward personalized healthcare in essential areas like electronic record management, data integration, computer-aided diagnosis, and disease prediction [16]. The nature of Artificial Intelligence is not clearly understood in either its theoretical or practical dimensions: it is variously portrayed as unnatural, non-human, and hence hazardous to humans, as the key to a brighter future, or as something that will essentially wipe out humankind in the not-too-distant future. The intuitive and conventional view of the separation between the natural and the artificial is based on their strict opposition [17].
Problem Statement, Hypothesis Statements, and Research Questions
Problem Statement
The problem is that medical malpractice attorneys currently rely heavily on manually collecting, processing, and analyzing data during the legal research process, which takes more time and increases the entire cost of handling every legal claim [5].
Hypothesis Statement
If a software-assisted solution can be developed such that the time spent collecting, processing, and analyzing data is greatly reduced, then lawyers will spend less time on the legal research process.
Research Question
How can a software-assisted solution based on artificial intelligence and big data analytics tools be developed to improve the efficiency of medical malpractice attorneys by greatly reducing the time spent collecting, processing, and analyzing data during the legal research process?
Methodology
Method
The problem addressed in the research was to develop a software-assisted solution based on both artificial intelligence and big data tools to improve efficiency in the medical malpractice legal sector by lowering labor-intensive data management, processing, and analysis costs [1]. Recent technological advancements have enabled managing complex issues and resolving legal conflicts more consistently [18].
The solution was a process with a corresponding set of tools related to Artificial Intelligence and Big Data. Real-time data was collected, processed, and analyzed using the solution to improve the efficiency of the medical malpractice legal research process. A set of experiments, each running a few typical use cases, was used to determine whether the proposed software-assisted solution improves classification performance and effectively addresses the root cause of the lengthy manual legal research process.
The flowchart shown in Fig. 1 was used to test the solution, which was developed with artificial intelligence and big data analytics tools to help medical malpractice attorneys greatly reduce the time spent collecting, processing, and analyzing data during the legal research process.
Population and Sample
The target population was the group from which a sample was taken; everyone with the necessary qualifications to participate in the doctoral project or dissertation-in-practice was eligible. A population in research is the collection of subjects of greatest interest; it is frequently the “who” or “what” about which the investigation hopes to say something [19]. According to Banerjee and Chaudhury [20], a population is a sizable collection of people who may hold the solution to a certain issue in an area. For all of Virginia’s medical malpractice result sets, the study used data from the Virginia Department of Health Professions (DHP). The DHP’s goal is to ensure competent and safe patient care through the licensing of health professionals, enforcement of professional standards, and dissemination of knowledge to the general public and other healthcare professionals. DHP comprises 13 health regulatory boards in Virginia, along with the Board of Health Professions, the Prescription Monitoring Program, and the Health Practitioners Monitoring Program. Across 62 professions, DHP licenses and oversees more than 500,000 healthcare professionals. To assess the various machine learning models, the data were split into training and testing samples using Java, Python, and R tools. The study involved no interviews or surveys with human participants [21], [22].
A sample was a subset of the study population: a group of people or things, such as events, from or about which data are collected [23]. When a portion of a research population is used, the procedure is known as sampling; a sampling technique therefore selects a subset of a population to test a hypothesis about the overall population [24]. The samples were chosen based on the population’s characteristics and the goal of the study. The study considered sampling scenarios with several variable testing and training ratios for the prediction analysis. By sampling a population, researchers can obtain the same results while spending less time and money, and non-random sampling was significantly less expensive than random sampling because of the lower costs involved in locating and gathering data from individuals. When only a limited amount of data was available, choosing how to split it between training and testing was challenging, which made evaluating model fit and performance difficult. The 5,000 instances in the dataset for this investigation were judged sufficient to avoid sampling issues.
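To make the splitting step concrete, the following is a minimal Python sketch using scikit-learn’s train_test_split; the file name, column names, and the 80/20 ratio are illustrative assumptions rather than the study’s exact configuration.

```python
# Minimal sketch of splitting a case-history dataset into training and
# testing samples, assuming a CSV with a text column and an outcome label.
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names used only for illustration.
cases = pd.read_csv("dhp_case_records.csv")   # roughly 5,000 records
X = cases["case_text"]                        # free-text case narrative
y = cases["outcome"]                          # e.g., sanction vs. no sanction

# 80/20 split; the study experimented with several training/testing ratios.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(len(X_train), "training records,", len(X_test), "testing records")
```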
Experiment and Results
Results
We built a quick prototype of the designed software solution using a combination of Java, Python, and R programming along with other analysis tools such as Sprinklr and TextBlob [25]. Web scraping and PDF file reading were performed with Java and the TextBlob library in Python. The R libraries openNLP (NLP), MASS (LDA), ggplot2 (visualization), wordcloud (text mining), and tm (text mining) were used for data analytics.
This solution consists of three types of functionality needed when handling medical malpractice legal claims: collecting data, processing data, and analyzing data. We used three use cases to test the prototyped software solution, each focused on one of the three functionalities: Use Case 1 on collecting data, Use Case 2 on processing data, and Use Case 3 on analyzing data.
Use Case 1: Collecting Data
The first step was to collect all of the information and store it in big data storage for later processing. In this initial phase, the solution used web scraping to gather laws and regulations from websites. The laws and regulations were downloaded from the official website of the Virginia Department of Health Professions (DHP) and saved in big data storage. The website offers a variety of PDF files, and a web scraper program was used to extract all the text from them [26]. Table I shows the automation processing time for collecting the rules and regulations. The solution used JSoup for web scraping: it sent HTTP queries using the GET method to retrieve data from the server and extracted and manipulated data using DOM traversal or CSS selectors. The law books were downloaded from different websites and stored in the big data platform for further processing, and the data source is refreshed whenever a new dataset becomes available. Table II shows the automation processing time for collecting the textbook data. A sketch of this collection step appears below.
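The prototype performs this step with JSoup in Java; as an illustration of the same idea (an HTTP GET request, CSS selectors to locate PDF hyperlinks, and text extraction from the downloaded PDFs), here is a minimal Python sketch. The listing URL is a placeholder, and the requests, BeautifulSoup, and pypdf libraries are stand-ins for the Java toolchain actually used.

```python
# Sketch: fetch a listing page, follow PDF hyperlinks, and extract their text.
import io
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from pypdf import PdfReader

LISTING_URL = "https://www.dhp.virginia.gov/"   # placeholder; the real listing page differs

html = requests.get(LISTING_URL, timeout=30).text          # HTTP GET request
soup = BeautifulSoup(html, "html.parser")

# CSS selector for anchors whose href ends in .pdf (JSoup supports the same syntax).
pdf_links = [urljoin(LISTING_URL, a["href"]) for a in soup.select('a[href$=".pdf"]')]

documents = {}
for link in pdf_links:
    pdf_bytes = requests.get(link, timeout=60).content
    reader = PdfReader(io.BytesIO(pdf_bytes))
    # Concatenate the extracted text of every page for later storage and processing.
    documents[link] = "\n".join(page.extract_text() or "" for page in reader.pages)

print(f"Collected {len(documents)} PDF documents")
```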
Table I. Automation processing time for collecting rules and regulations

Data source type | Data source unit | Processing time |
---|---|---|
DHP Virginia publicly available PDF hyperlinks | 1 link web scraping and reading PDFs | 4 minutes |
DHP Virginia publicly available PDF hyperlinks | 5 links web scraping and reading PDFs | 12 minutes |
DHP Virginia publicly available PDF hyperlinks | 10 links web scraping and reading PDFs | 21 minutes |
DHP Virginia publicly available PDF hyperlinks | 25 links web scraping and reading PDFs | 27 minutes |
DHP Virginia publicly available PDF hyperlinks | 47 links web scraping and reading PDFs | 37 minutes |
Table II. Automation processing time for collecting textbook data

Data source type | Data source unit | Processing time |
---|---|---|
Doctor behind bars book | 263 pages | 12–13 minutes |
Model rules of professional conduct book | 316 pages | 14 minutes |
Nurse practitioner’s business practice and legal guide | 538 pages | 17 minutes |
Medical malpractice case report. Cardiology book | 101 pages | 9 minutes |
Doctors and the law | 276 pages | 13 minutes |
The complete past case history is accessible on the DHP Virginia official webpages, from which the dataset was downloaded and then stored in big data storage. Table III shows the automation processing time for collecting the past case history. The pre-processed data were used to create the relevant training, test, and validation portions of the dataset, and records used for data analysis were collected one at a time after data purification. A sketch of the PII-cleaning step is shown below.
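A minimal sketch of the PII-cleaning step follows, assuming simple regular-expression redaction; the patterns and the license-number format are illustrative assumptions, not the rules the prototype actually applies.

```python
# Sketch: redact common personally identifiable information from scraped records.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "LICENSE_NO": re.compile(r"\b\d{10}\b"),   # hypothetical license-number format
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def clean_pii(record_text: str) -> str:
    """Replace each matched PII pattern with a neutral placeholder."""
    for label, pattern in PII_PATTERNS.items():
        record_text = pattern.sub(f"[{label}]", record_text)
    return record_text

sample = "Contact Dr. Doe at 804-555-0147 or jdoe@example.com, license 1234567890."
print(clean_pii(sample))
# -> Contact Dr. Doe at [PHONE] or [EMAIL], license [LICENSE_NO].
```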
Table III. Automation processing time for collecting past case history

Data source type | Data source unit | Processing time |
---|---|---|
DHP Virginia publicly available records | 1 month records web scraping and cleaning PII data | 12 minutes |
DHP Virginia publicly available records | 6 months records web scraping and cleaning PII data | 27 minutes |
DHP Virginia publicly available records | 1 year records web scraping and cleaning PII data | 35 minutes |
DHP Virginia publicly available records | 3 years records web scraping and cleaning PII data | 70 minutes |
DHP Virginia publicly available records | 5 years records web scraping and cleaning PII data | 90 minutes |
Use Case 2: Processing Data
Since most data in the real world is unstructured, it is challenging to streamline data processing activities; moreover, because data generation never ends, gathering and storing data keeps getting harder. Converting unstructured material into a structured format is called text mining, often referred to as text data mining, and it is used to find significant patterns and fresh perspectives. Text mining and NLP were used together for a variety of tasks in which analysis is carried out on a pool of user-generated information.
Big data processing first gathered and cleansed the data; after obtaining high-quality data, it was used for statistical analysis. In this first stage of big data processing, information was gathered from a variety of sources. Unstructured data were converted into structured data, and the structured data were put into a comprehensible form using various methods for processing large amounts of data [27]. The sketch below illustrates this conversion.
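As a minimal illustration of turning unstructured text into a structured representation, the sketch below builds a small document-term matrix with scikit-learn; the sample sentences are placeholders for the scraped regulations and case records.

```python
# Sketch: convert unstructured text into a structured document-term matrix.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder documents standing in for scraped regulations and case records.
docs = [
    "Policy limits effectively cap recovery in many cases.",
    "Plaintiffs' lawyers rarely pursue defendants beyond policy limits.",
    "Physicians named in malpractice claims face board review.",
]

vectorizer = CountVectorizer(stop_words="english", lowercase=True)
dtm = vectorizer.fit_transform(docs)          # sparse matrix: documents x terms

# Structured view: one row per document, one column per term.
structured = pd.DataFrame(dtm.toarray(), columns=vectorizer.get_feature_names_out())
print(structured)
```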
Because the dataset was known to contain incorrect values [28], the data were first pre-processed to address any data quality issues. The pre-processed data were then used to create the appropriate train, test, and validation portions of the dataset. Any exploratory data analysis seeks to transform data into a form the machine learning model can use to forecast a target variable as accurately as possible. Because the legal software solution stored and transferred extremely sensitive client data, it required high-quality software testing, and because the legal industry is full of jargon and field-specific expressions, knowledge of legal systems and procedures was essential. To guarantee sufficient test coverage, a risk assessment and risk-based testing were done. Precision and privacy were verified with thorough functional testing, including manual and automated tests, and such problems were resolved and the accuracy of each process ensured by performance and security testing carried out according to a well-planned testing methodology.
Data were cleaned and validated to assess and compare the dataset. After the model was trained, predictions were made on the test portion of the dataset and contrasted with the test data’s actual values, as sketched below. Data science and advanced analytics disciplines were deployed to run different applications once the data were obtained and ready for analysis [13]. Table IV shows the automation processing time for processing the rules and regulations used for data analysis, Table V shows the processing time for the textbook data, and Table VI shows the processing time for the past case history.
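To make the comparison of predictions with actual values concrete, here is a hedged sketch that fits a simple classifier on the training partition and scores it on the held-out test partition; the toy records and the Naive Bayes model are illustrative stand-ins, not the data or model used in the study.

```python
# Sketch: compare model predictions on the held-out test data with actual values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-in for pre-processed case records and their outcome labels.
texts = [
    "monetary penalty imposed on physician",
    "license reinstatement denied after review",
    "terms of probation terminated",
    "monetary penalty and continued probation",
    "reinstatement petition denied",
    "probation terms terminated early",
] * 10
labels = ["penalty", "denied", "terminated"] * 20

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(X_train, y_train)
y_pred = model.predict(X_test)                     # predictions on the test portion
print("Accuracy vs. actual values:", accuracy_score(y_test, y_pred))
```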
Table IV. Automation processing time for processing rules and regulations

Data source type | Data source unit | Processing time |
---|---|---|
DHP Virginia publicly available PDF hyperlinks | 1 link web scraping and reading PDFs | 2 minutes |
DHP Virginia publicly available PDF hyperlinks | 5 links web scraping and reading PDFs | 3 minutes |
DHP Virginia publicly available PDF hyperlinks | 10 links web scraping and reading PDFs | 5 minutes |
DHP Virginia publicly available PDF hyperlinks | 25 links web scraping and reading PDFs | 9 minutes |
DHP Virginia publicly available PDF hyperlinks | 47 links web scraping and reading PDFs | 11 minutes |
Table V. Automation processing time for processing textbook data

Data source type | Data source unit | Processing time |
---|---|---|
Doctor behind bars book | 263 pages | 2 minutes |
Model rules of professional conduct book | 316 pages | 4 minutes |
Nurse practitioner’s business practice and legal guide | 538 pages | 7 minutes |
Medical malpractice case report. Cardiology book | 101 pages | 2 minutes |
Doctors and the law | 276 pages | 4 minutes |
Table VI. Automation processing time for processing past case history

Data source type | Data source unit | Processing time |
---|---|---|
DHP Virginia publicly available records | 1 month records web scraping and cleaning PII data | 2 minutes |
DHP Virginia publicly available records | 6 months records web scraping and cleaning PII data | 4 minutes |
DHP Virginia publicly available records | 1 year records web scraping and cleaning PII data | 5 minutes |
DHP Virginia publicly available records | 3 years records web scraping and cleaning PII data | 9 minutes |
DHP Virginia publicly available records | 5 years records web scraping and cleaning PII data | 12 minutes |
Use Case 3: Analyzing Data
Finding trends, patterns, and correlations in vast amounts of unprocessed data to support data-driven decision-making is known as big data analytics, and it took time to transform the raw data into a useful form. Natural language processing (NLP) and text mining, two of the most useful subfields of data mining, were used in this solution to process and analyze the collected dataset. Natural language processing, also known as text analysis, is the process of finding valuable information and insights in vast amounts of text data, and numerous techniques and tools were employed to examine unstructured text to identify patterns, trends, and information. The books’ text was tokenized, i.e., divided into words or tokens, and every word was converted to lowercase to ensure consistency. The next step was to use lemmatization or stemming to reduce the words to their base or root form, as in the sketch below.
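As an illustration of these tokenization, lowercasing, and lemmatization steps, here is a minimal Python sketch using NLTK (the toolkit named later in this section); the sample sentence and the printed output are illustrative only.

```python
# Sketch: tokenize, lowercase, and lemmatize text before topic modeling.
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time downloads of the tokenizer model and WordNet data.
nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)   # required by newer NLTK releases
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

lemmatizer = WordNetLemmatizer()

text = "Physicians facing malpractice claims were reviewed by the licensing boards."
tokens = [tok.lower() for tok in word_tokenize(text) if tok.isalpha()]  # tokenize + lowercase
lemmas = [lemmatizer.lemmatize(tok) for tok in tokens]                  # reduce to base form

print(lemmas)
# e.g. ['physician', 'facing', 'malpractice', 'claim', 'were', 'reviewed', 'by',
#       'the', 'licensing', 'board']
```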
To find recurring themes or subjects in the text, topic modeling with Latent Dirichlet Allocation (LDA) was used in this solution. LDA generates two primary matrices, the Topic-Term Matrix and the Document-Topic Matrix, which show the distribution of terms in each topic and of topics in each document, respectively [29]. The most crucial details for understanding the topics are found in the Topic-Term Matrix: each row denotes a topic, each column a term, and the values represent the probability of a term appearing in a given topic. Once the topics had been determined, the solution used the Document-Topic Matrix to allocate each document to one or more topics, which made it possible to identify the themes each document tends to focus on. Below is a worked example of the Latent Dirichlet Allocation (LDA) algorithm on a modest collection of documents drawn from the legal corpus [29].
Step 1: Data Preparation
Imagine we have a collection of five short documents:
- “We find that policy limits effectively cap recovery in many cases”
- “Baker reports that plaintiffs’ lawyers have a strong norm of not pursuing defendants”
- “Press reports suggest that some physicians employ asset”
- “Number of Texas medical malpractice plaintiffs’ lawyers”
- “Low policy limits thus may serve as a form of defendant self-help—a de facto cap”
Step 2: Document-Term Matrix (DTM)
We start by creating a Document-Term Matrix (DTM) that represents the frequency of words in each document. For simplicity, let’s assume we have already preprocessed the text and created a DTM:
Document | policy | plaintiff | physicians | malpractice |
---|---|---|---|---|
Doc 1 | 4 | 0 | 0 | 0 |
Doc 2 | 0 | 3 | 0 | 0 |
Doc 3 | 0 | 2 | 0 | 0 |
Doc 4 | 0 | 0 | 3 | 0 |
Doc 5 | 0 | 0 | 0 | 3 |
Step 3: LDA Modeling
Now, we apply LDA to this DTM. We specify that we want to discover three topics. LDA estimates the topic-term distribution and document-topic distribution.
Step 4: Number of Topics
In this example, we’ll use three topics.
Step 5: Topic-Term Distribution
LDA outputs a Topic-Term Matrix. For simplicity, let’s assume the resulting matrix looks like this:
Topic | policy | plaintiff | physicians | malpractice |
---|---|---|---|---|
Topic 1 | 0.6 | 0.1 | 0.1 | 0.1 |
Topic 2 | 0.1 | 0.6 | 0.1 | 0.1 |
Topic 3 | 0.1 | 0.1 | 0.6 | 0.1 |
Step 6: Topic Interpretation
Based on the words associated with each topic, we can interpret the topics as follows:
- Topic 1: policy
- Topic 2: plaintiff
- Topic 3: physicians
Step 7: Document Assignment
We can now assign each document to one or more topics based on the Document-Topic Matrix. For instance:
- Doc 1 is primarily about policy (high probability for Topic 1).
- Doc 2 is mainly about the plaintiff (high probability for Topic 2).
- Doc 3 is focused on physicians (high probability for Topic 3).
- Doc 4 discusses physicians (with some mentions of policy).
- Doc 5 is about malpractice (with some mentions of policy).
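The worked example can be reproduced programmatically. Below is a minimal Python sketch using scikit-learn’s CountVectorizer and LatentDirichletAllocation; this is an illustration of the algorithm rather than the prototype’s exact implementation (which used R libraries and Python’s NLTK), and with such a tiny corpus the estimated probabilities will differ from the idealized matrices shown above.

```python
# Sketch: Latent Dirichlet Allocation over a tiny corpus, mirroring Steps 1-7 above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "We find that policy limits effectively cap recovery in many cases",
    "Baker reports that plaintiffs' lawyers have a strong norm of not pursuing defendants",
    "Press reports suggest that some physicians employ asset",
    "Number of Texas medical malpractice plaintiffs' lawyers",
    "Low policy limits thus may serve as a form of defendant self-help, a de facto cap",
]

# Step 2: document-term matrix (vocabulary restricted to four terms for readability).
vectorizer = CountVectorizer(vocabulary=["policy", "plaintiffs", "physicians", "malpractice"])
dtm = vectorizer.fit_transform(docs)

# Steps 3-4: fit LDA with three topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(dtm)          # Document-Topic matrix (rows sum to 1)

# Step 5: topic-term distribution (normalize the per-topic component weights).
topic_term = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

terms = vectorizer.get_feature_names_out()
for k, row in enumerate(topic_term):
    top = ", ".join(f"{t}: {p:.2f}" for t, p in zip(terms, row))
    print(f"Topic {k + 1}: {top}")

# Step 7: assign each document to its highest-probability topic.
for i, dist in enumerate(doc_topic):
    print(f"Doc {i + 1} -> Topic {dist.argmax() + 1} (p = {dist.max():.2f})")
```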
Text mining, which is frequently employed in knowledge-driven companies, is the process of looking through large collections of documents to find new information or to support the resolution of specific research problems. Deep learning employs neural networks in an iterative manner that is more flexible and natural than traditional machine learning for examining data [30].
As several data sources needed to be combined into one model, a concatenation approach was used in this solution. Python’s NLTK (natural language toolkit), in conjunction with R programming for topic modeling, was used. All the information gained in the previous steps, including information from books and former court cases stored in big data storage, was collected, and recent AI models were applied to assess the meaning of incoming text against the input search data and provide expressive, meaningful output. Table VII below shows quantitative results for the time taken to analyze various medical malpractice test cases.
Table VII. Analysis time for various medical malpractice test cases

Data source type | Processing time |
---|---|
Medicine monetary penalty | 3 minutes |
Optometrist reinstatement denied | 2 minutes |
Pharmacy terms terminated | 2 minutes |
Clinical psychologist | 1 minute |
School psychologist | 2 minutes |
Dentist terms terminated | 2 minutes |
Summary
The artifact was evaluated with different test cases reflecting the different data sources used by the solution built with big data and artificial intelligence. The scholarly literature served as a guide for the creation of the artifact and the problem domain it is designed to address. Based on theoretical models and the control system, the study’s findings represent a design science artifact. Although technology is used worldwide to solve complicated problems, the medical malpractice industry lags behind in adopting new technologies because doing so carries risk. Medical malpractice claims are challenging legal proceedings that may arise for several causes. Despite the fact that the laws frequently reflect the same legal framework, medical malpractice proceedings are intricate and non-standard, and the specifics of the legal procedure vary from state to state and jurisdiction to jurisdiction [31]. Because of their heavy reliance on manual data collection, processing, and analysis during the legal research process, medical malpractice attorneys continue to overbill their clients. To expedite the legal research process, the system automates the data collection, processing, and analysis steps.
Artificial intelligence (AI), the newest development in information technology, can mimic human thinking processes in machines and provides speed, precision, and efficiency. The proposed technology will be used to help settle legal conflicts, despite the complexity of that process. Understanding a legal topic is crucial because it informs the public and helps them understand their alternatives in legal matters. To drastically cut down the time spent obtaining, processing, and analyzing data during the legal research process, the research project designed a solution based on artificial intelligence and big data analytics tools. Artificial intelligence is likely to bring significant changes to the legal profession over the next few years: AI can generate legal predictions from legal data sets and analyze legal material based on semantics, which can help automate the court system and increase efficiency within an acceptable budget [32].
The purpose of this design science research project is to create a system that uses big data analytics and artificial intelligence to drastically cut down the time needed for data collection, processing, and analysis during legal research. In domains as diverse as banking, forest management, national security, and medicine, humans increasingly depend on artificially intelligent computers to help make decisions that are more accurate, efficient, and, under the right conditions, equitable [33]. AI can generate legal predictions from legal data sets and analyze legal material based on semantics, helping to automate the court system and increase efficiency within an acceptable budget [5]. To produce prospective software-assisted algorithms, this design study gathered, reviewed, and evaluated previous court cases, legal literature, statutes and rules, and social media. It has been suggested that large data sets combined with the right analysis can solve complicated issues quickly [9]. To address the labor-intensive problem, big data platforms for storage and specialized AI algorithms were developed; such a platform can raise the standard of living for the public by establishing accountability and reducing the needlessly high costs associated with the legal industry. Big data analytics is the most recent IT revolution, and in some situations the problem of storing large datasets for analysis and outcome prediction can be solved by big data [34]. Artificial intelligence will be used for decision-making, perhaps with the help of stronger statistical techniques; computers that use artificial intelligence can simulate human thought processes, and big data improves speed, accuracy, and efficiency [35].
The study’s conclusions suggest that there is a way to drastically cut down the time needed for data collection, processing, and analysis during the legal research process. The artifact was demonstrated in the study along with its suitability for the chosen use cases. Artificial intelligence will be used to make decisions through potentially more powerful statistical techniques; computers with artificial intelligence can simulate human thought processes, and big data improves speed, accuracy, and efficiency. To recommend the appropriate course of action for legal issues, the artificial intelligence platform processes thousands of samples and feeds past occurrences back into the big data platform. Big data analytics becomes even more powerful when data scientists and intelligent algorithms work together to extract insightful knowledge from massive amounts of data [36]. Predictive analytics uses data, statistical algorithms, and machine learning techniques to estimate the probability of future events based on historical data. The quantitative results produced by the methodology show how helpful such a solution will be: the time spent obtaining, processing, and evaluating data for legal research would be greatly decreased, and the core reason for overbilling clients, a problem many medical malpractice attorneys currently face, can be addressed [6].
Conclusion
The study’s applicability provides benefits in both academic and real-world contexts. In academic research, the study advanced understanding and added to the body of knowledge regarding value generation from big data. The artifact generated by this research is in the early development phase of the design life cycle, and it may be expanded in future research using further design science techniques and case studies. By mapping the relationships among big data elements, both internally and externally, further extension of the artifact through design science and case studies can increase the body of knowledge on big data value creation.
The design, development, testing, or implementation phases of a mission or product SDLC could be used to expand on the artifact this study produced. The body of knowledge on the analysis medical malpractice attorneys need to assess a remedy can be increased by further extending the artifact using DSR and case study approaches. Technology is making it possible to address more important problems faster while requiring fewer individuals to finish tasks, so social media, public data, state laws, and previous cases can all be used to help settle medical malpractice claims. The impact of malpractice reforms on healthcare spending and individual health is significant, and comparative-fault reform appears to be associated with increased costs. It has been suggested that, given large data sets and the right analysis, simple answers to complex problems may be obtained. An artificial intelligence platform reads past instances into a big data platform, processes thousands of examples to fine-tune the algorithm, and recommends the best course of action for legal challenges. Big data analytics is enhanced when sophisticated algorithms and data scientists work together to uncover valuable insights from vast amounts of data.
References
1. Nepps ME. The basics of medical malpractice: a primer on navigating the system. Chest. 2008;134(5):1051–5.
2. Grams R. The progress of an American EHR-part 1. J Med Syst. 2012;36(5):3077–8. doi: 10.1007/s10916-011-9784-0.
3. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. Artif Intell Healthcare. 2020;2020:25–60. doi: 10.1016/B978-0-12-818438-7.00002-2.
4. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8. doi: 10.7861/futurehosp.6-2-94.
5. Sil R, Roy A, Bhushan B, Mazumdar AK. Artificial intelligence and machine learning based legal application: the state-of-the-art and future research trends. 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 57–62. IEEE; 2019.
6. Lang MB. Explaining the Unexplainable: Medical Decision-Making, AI, and a Right to Explanation. McGill University (Canada); 2022.
7. Zhong H, Xiao C, Tu C, Zhang T, Liu Z, Sun M. How does NLP benefit legal system: a summary of legal artificial intelligence. 2020. arXiv preprint arXiv:2004.12158.
8. Bessen JE, Meurer MJ. The Private Costs of Patent Litigation. Boston University School of Law Working Paper; 2008. pp. 7–8.
9. Luu T. Reducing the costs of civil litigation, Public Law Research Institute. 2004. Available from: https://gov.uchastings.edu/publiclaw/docs/plri/cstslit.pdf.
10. Satapathy A. Applications of assistive tools and technologies in enhancing the learning abilities of dyslexic children. Techno Learn. 2019;9(2):117–23. doi: 10.30954/2231-4105.02.2019.9.
11. Lohr S. A.I. is coming for lawyers, again. 2023. Available from: https://www.nytimes.com/2023/04/10/technology/ai-is-coming-forlawyers-again.html.
12. Rahul K, Banyal RK. Data life cycle management in big data analytics. Procedia Comput Sci. 2020;173:364–71. doi: 10.1016/j.procs.2020.06.042.
13. Li S, Yu H. Big data and financial information analytics ecosystem: strengthening personal information under legal regulation. Inf Syst E-Bus Manag. 2019;18(4):891–909. doi: 10.1007/s10257-019-00404-z.
14. Shmueli G, Koppius O. Predictive analytics in information systems research. MIS Quart. 2011;35(3):553–72. doi: 10.2307/23042796.
15. Das N, Das L, Rautaray SS, Pandey M. Big data analytics for medical applications. Int J Modern Edu Comput Sci. 2018;11(2):35. doi: 10.5815/ijmecs.2018.02.04.
16. Nithya B, Ilango V. Predictive analytics in health care using machine learning tools and techniques. 2017 International Conference on Intelligent Computing and Control Systems (ICICCS), pp. 492–9. 2017. doi: 10.1109/ICCONS.2017.8250771.
17. Havlík V. The Naturalness of Artificial Intelligence from the Evolutionary Perspective. AI & Society; 2018. pp. 1–10. doi: 10.1007/s00146-018-0829-5.
18. Bal BS. An introduction to medical malpractice in the United States. Clin Orthop Relat Res. 2009;467(2):339–47. doi: 10.1007/s11999-008-0636-2.
19. Creswell JW. Application of mixed-methods research designs to trauma research. 2009.
20. Banerjee A, Chaudhury S. Statistics without tears: populations and samples. Ind Psychiatry J. 2010;19(1):60–5. doi: 10.4103/0972-6748.77642.
21. Ibrahim M. Reducing correlation of random forest-based learning-to-rank algorithms using subsample size. Comput Intell. 2019;35(4):774–98. doi: 10.1111/coin.12213.
22. Liu R, Chen Y, Wu J, Gao L, Barrett D, Xu T, et al. Integrating entropy-based naïve Bayes and GIS for spatial evaluation of flood hazard. Risk Anal. 2016;37(4):756–73. doi: 10.1111/risa.12698.
23. Emerson RW. Convenience sampling, random sampling, and snowball sampling: how does sampling affect the validity of research? J Visual Impair Blin. 2015;109(2):164–8.
24. Johnson G. A Quantitative Study of the Resultant Differences Between Additive Practices and Reductive Practices in Data Requirements Gathering. Colorado Technical University; 2016.
25. Nagpal A, Gabrani G. Python for data analytics, scientific and technical applications. 2019 Amity International Conference on Artificial Intelligence (AICAI), pp. 140–5. IEEE; 2019.
26. Khder MA. Web scraping or web crawling: state of art, techniques, approaches and application. Int J Adv Soft Comput Appl. 2021;13(3):144–68.
27. Gunawan R, Rahmatulloh A, Darmawan I, Firdaus F. Comparison of web scraping techniques: regular expression, HTML DOM and XPath. 2018 International Conference on Industrial Enterprise and System Engineering (ICoIESE 2018), pp. 283–7. Atlantis Press; 2019.
28. Guan Y, Plötz T. Ensembles of deep LSTM learners for activity recognition using wearables. Proc ACM Interact Mob Wearable Ubiquitous Technol. 2017;1(2):1–28. doi: 10.1145/3090076.
29. Kherwa P, Bansal P. Topic modeling: a comprehensive review. EAI Endorsed Trans Scalable Inf Syst. 2019;7(24):1–16.
30. Davenport TH. From analytics to artificial intelligence. J Bus Anal. 2018;1(2):73–80.
31. Wang F, Krishnan SK. Medical malpractice claims within cardiology from 2006 to 2015. Am J Cardiol. 2019;123(1):164–8.
32. Zhang X, Wang Y. Research on intelligent medical big data system based on Hadoop and blockchain. Eurasip J Wirel Comm Netw. 2021;2021(1):1–21.
33. Frees EW, Gao L. Predictive analytics and medical malpractice. North Am Actuar J. 2020;24(2):211–27.
34. Ghavami P. Big Data Analytics Methods: Analytics Techniques in Data Mining, Deep Learning and Natural Language Processing. Walter de Gruyter GmbH & Co KG; 2019.
35. Chen M, Mao S, Liu Y. Big data: a survey. Mobile Netw Appl. 2014;19(2):171–209.
36. Gray TR. Medical liability insurance data analytics: an opportunity to identify risks, target interventions and impact policy. In Health Informatics. Productivity Press; 2022. pp. 407–15.